When we get sick, it feels natural to try to speed up our recovery by getting some extra shuteye. New research with fruit flies suggests it’s a good idea.
“It’s an intuitive response to want to sleep when you get sick,” says Julie A. Williams, a research associate at the University of Pennsylvania’s Center for Sleep and Circadian Neurobiology. “Many studies have used sleep deprivation as a means to understand how sleep contributes to recovery, if it does at all, but there is surprisingly little experimental evidence that supports the notion that more sleep helps us to recover. We used a fruit fly model to answer these questions.”
Williams and postdoctoral fellow Tzu-Hsing Kuo conducted two related studies to directly examine the effects of sleep on recovery from, and survival after, an infection. Their findings appear online in two papers in the journal SLEEP.
In the first paper, the researchers took a conventional approach by subjecting fruit flies to sleep deprivation before infecting them with either Serratia marcescens or Pseudomonas aeruginosa bacteria. Both the sleep-deprived flies and a non-sleep-deprived control group displayed increased sleep after infection—what the experimenters call an “acute sleep response.”
Unexpectedly, the pre-infection, sleep-deprived flies had a better survival rate.
“To our surprise, they actually survived longer after the infection than the ones who were not sleep-deprived,” Williams says. Flies deprived of sleep before infection slept longer afterward than the undisturbed controls, and they also lived longer during the infection.
Inducing sleep deprivation after infection rather than before made little difference, as long as the infected flies then got adequate recovery sleep. “We deprived flies of sleep after infection with the idea that if we blocked this sleep, things would get worse in terms of survival,” Williams says. “Instead they got better, but not until after they had experienced more sleep.”

Post-infection sleep
Sleep deprivation increases activity of an NFkB transcription factor called Relish, which is also needed for fighting infection. Flies lacking the Relish gene do not mount an acute sleep response and very quickly succumb to infection. But when these mutants are sleep-deprived before infection, they display increased sleep afterward and survive longer.
The team then evaluated mutant flies that lacked two varieties of NFkB (Relish and Dif). When flies lacked both types of NFkB genes, sleep deprivation had no effect on the acute sleep response, and the effect on survival was abolished. Flies from both sleep-deprived and undisturbed groups succumbed to infection at equal rates within hours.
“Taken together, all of these data support the idea that post-infection sleep helps to improve survival,” Williams says.
In the second study, the researchers manipulated sleep through a genetic approach, using the drug RU486 to induce expression of ion channels that alter neuronal activity in the mushroom body of the fly brain and thereby regulate sleep patterns. Compared to a control group, flies induced to sleep more for up to two days before infection showed substantially greater survival rates.

Get more Zzzzs
The flies with more sleep also showed faster and more efficient rates of clearing the bacteria from their bodies. “Again, increased sleep somehow helps to facilitate the immune response by increasing resistance to infection and survival after infection,” Williams says.
Because the genetic factors investigated by the researchers, such as the NFkB pathway, are conserved in mammals, the relative simplicity of the Drosophila model provides an ideal avenue to explore basic functions like sleep.
“Investigators have been working on questions about sleep and immunity for more than 40 years, but by narrowing down the questions in the fly we’re now in a good position to identify potentially novel genes and mechanisms that may be involved in this process that are difficult to see in higher animals,” Williams says.
“These studies provide new evidence of the direct and functional effects of sleep on immune response and of the underlying mechanisms at work. The take-home message from these papers is that when you get sick, you should sleep as much as you can—we now have the data that supports this idea.”
The National Science Foundation supported the work.
Source: University of Pennsylvania
We tend to think that a group decision is more likely to be accurate when there are more brains involved—but that might not be true in all situations.
Researchers report that smaller groups actually tend to make more accurate decisions, while larger assemblies may become excessively focused on only certain pieces of information.
The findings present a significant caveat to what is known about collective intelligence, or the “wisdom of crowds,” wherein individual observations—even if imperfect—coalesce into a single, accurate group decision.
A classic example of crowd wisdom is English statistician Sir Francis Galton’s 1907 observation of a contest in which villagers attempted to guess the weight of an ox. Although not one of the 787 estimates was correct, the average of the guessed weights was just one pound short of the animal’s recorded heft. Along those lines, the consensus has been that group decisions are enhanced as more individuals have input.
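The averaging behind Galton’s observation can be sketched in a few lines of Python. This is a toy simulation, not a reconstruction of the contest: it assumes each villager’s error is independent and unbiased, and it uses 1,198 pounds, the weight commonly cited for Galton’s ox (the article itself gives only the one-pound margin).

```python
import random

random.seed(42)
true_weight = 1198  # lbs; the recorded weight commonly cited for Galton's ox

# 787 independent guesses, each off by a random, unbiased error
guesses = [random.gauss(true_weight, 75) for _ in range(787)]

crowd_estimate = sum(guesses) / len(guesses)
print(round(crowd_estimate))  # lands within a few pounds of the true weight
```

The averaging works only because the errors are independent; as the study below argues, real information sources often are not.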
But collective decision-making has rarely been tested under complex, “realistic” circumstances where information comes from multiple sources, the researchers report in the journal Proceedings of the Royal Society B.
In these scenarios, crowd wisdom peaks early then becomes less accurate as more individuals become involved, explains senior author Iain Couzin, a professor of ecology and evolutionary biology at Princeton University.
“This is an extension of the wisdom-of-crowds theory that allows us to relax the assumption that being in big groups is always the best way to make a decision,” Couzin says.
“It’s a starting point that opens up the possibility of capturing collective decision-making in a more realistic environment,” he says. “When we do see small groups of animals or organisms making decisions they are not necessarily compromising accuracy. They might actually do worse if more individuals were involved. I think that’s the new insight.”
Couzin and first author Albert Kao, a graduate student of ecology and evolutionary biology in Couzin’s group, created a theoretical model in which a “group” had to decide between two potential food sources.
The group’s decision accuracy was determined by how well individuals could use two types of information: one that was known to all members of the group—known as correlated information—and another that was perceived by only some individuals, or uncorrelated information.
The researchers found that the communal ability to pool both pieces of information into a correct, or accurate, decision was highest in a band of five to 20. After that, the accurate decision increasingly eluded the expanding group.

The benefits of ‘noise’
At work, Kao says, was the dynamic between correlated and uncorrelated cues. With more individuals, that which is known by all members comes to dominate the decision-making process. The uncorrelated information gets drowned out, even if individuals within the group are still well aware of it.
In smaller groups, on the other hand, the lesser-known cues nonetheless earn as much consideration as the more common information. This is due to the more random nature of small groups, which is known as “noise” and typically seen as an unwelcome distraction. Couzin and Kao, however, found that noise is surprisingly advantageous in these smaller arrangements.
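One way to see this dynamic is a toy majority-vote model (a simplification of my own construction, not the published Kao–Couzin model): each voter independently attends to a shared, correlated cue with probability f, or to a private, uncorrelated cue otherwise, and everyone following the shared cue votes the same way. The cue reliabilities below (55 percent shared, 70 percent private) are illustrative numbers, not figures from the paper.

```python
from math import comb

def binom_pmf(j, m, p):
    """P(exactly j successes in m independent trials of probability p)."""
    return comb(m, j) * p**j * (1 - p)**(m - j)

def group_accuracy(n, f, p_c, p_u):
    """Exact probability that a majority vote of n (odd) voters is correct.

    Each voter follows the shared cue with probability f; that cue is
    correct with probability p_c, and all of its followers vote alike.
    The rest vote independently, each correct with probability p_u.
    """
    need = n // 2 + 1  # votes needed for a majority (n odd, so no ties)
    acc = 0.0
    for k in range(n + 1):          # k voters follow the shared cue
        pk = binom_pmf(k, n, f)
        m = n - k                   # independent voters
        # shared cue correct: its k followers all vote correctly
        p_right = sum(binom_pmf(j, m, p_u) for j in range(max(0, need - k), m + 1))
        # shared cue wrong: only the independent voters can be correct
        p_wrong = sum(binom_pmf(j, m, p_u) for j in range(need, m + 1))
        acc += pk * (p_c * p_right + (1 - p_c) * p_wrong)
    return acc

# A weak shared cue (55% reliable) versus better private cues (70%):
for n in (1, 5, 51):
    print(n, round(group_accuracy(n, f=0.5, p_c=0.55, p_u=0.7), 3))
```

With these hypothetical numbers, accuracy peaks at the small group and then falls back toward the shared cue’s 55 percent reliability as the group grows: in a large group the majority almost always mirrors the shared cue, while small-group randomness in who attends to which cue lets the better private information win out.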
“It’s surprising that noise can enhance the collective decision,” Kao says. “The typical assumption is that the larger the group, the greater the collective intelligence.
“We found that if you increase group size, you see the wisdom-of-crowds benefit, but if the group gets too large there is an over-reliance on high-correlation information,” he says. “You would find yourself in a situation where the group uses that information to the point that it dominates the group’s decision-making.”

Who gets to make decisions?
None of this is to suggest that large groups would benefit from axing members, Couzin says. The size threshold he and Kao found corresponds with the number of individuals making the decisions, not the size of the group overall.
The researchers cite numerous studies—including many from Couzin’s lab—showing that decisions in animal groups such as schools of fish often fall to a select few members. Thus, these organisms can exhibit highly coordinated movements despite vast numbers of individuals. (Such hierarchies could help animals realize a dual benefit of efficient decision-making and defense via strength in numbers, Kao says.)
“What’s important is the number of individuals making the decision,” Couzin says. “Just looking at group size per se is not necessarily relevant. It depends on the number of individuals making the decision.”
The National Science Foundation, the Office of Naval Research, the Human Frontier Science Project, and the Army Research Office supported the research.
Source: Princeton University
New research finds that talking about social class helps first-generation college students reduce the social-class achievement gap by as much as 63 percent.
After taking part in a “difference-education” session, these students had higher grade-point averages and took better advantage of college resources than peers who didn’t participate in the discussion.
Research has shown that first-generation college students—those who do not have a parent with a college degree—often lag behind other students in grades and graduation rates. They also often struggle socially, finding it hard to fit in and sometimes feeling like they don’t belong in college.
But the new study suggests a different approach to help them advance in college: discuss class differences rather than ignore them.
“The research showed that when incoming first-generation students saw and heard junior and senior students with different social-class backgrounds tell stories about their struggles and successes in college, they gained a framework to understand how their backgrounds shaped their own experiences, and how to see this as an asset,” says MarYam Hamedani, a coauthor of the paper, psychologist, and associate director of Stanford University’s Center for Comparative Studies in Race and Ethnicity.
Continuing-generation students—those with at least one parent with a four-year college degree—don’t experience similar gaps in opportunity and achievement. Coming from families with more experience with the world of higher education helps them navigate college and the norms, rules, and expectations that are often implicit or unspoken, Hamedani adds.
While many colleges and universities have aggressively recruited more first-generation students, she says, the schools have not yet figured out how to get these students through college successfully. This has created “a paradox” that fuels, rather than mitigates, the growing inequality gap in society.

‘Subtle’ discussion
In their study, which took place at a private Midwestern university, the researchers invited first-generation and continuing-generation students at the beginning of the school year to attend a one-hour program designed to help them transition to college.
Half of the students attended a “difference-education” program while the other half attended a “standard” program. They were not aware of the separate programs or their content.
In both settings, the freshman students listened to a diverse panel of junior and senior students talk about their transition to college, challenges they faced, and how they found success. In the difference-education program, however, panelists’ stories also included a subtle discussion of how their social-class backgrounds mattered in college. The panels included both first-generation and continuing-generation students.
For example, panelists in the difference-education group were asked, “Can you provide an example of an obstacle that you faced when you came to (university name) and how you resolved it?”
One first-generation panelist responded, “Because my parents didn’t go to college, they weren’t able to provide me the advice I needed. So it was sometimes hard to figure out which classes to take … I learned I needed to rely on my adviser more than other students.”
In the standard program, however, the panelists did not reveal their social class. Their stories consisted of a general discussion about college that was not linked to their social-class backgrounds.
For instance, one panelist was asked, “What do you do to be successful in your classes?” He answered, “Go to class and pay attention. If you don’t understand something or have a hard time with the material, meet with your teaching assistant or professor during office hours.”

The impact of difference-education
At the end of the academic year, the researchers found that the first-generation students in the difference-education intervention had higher year-end grades than those in the standard group (average GPAs of 3.40 vs. 3.16), and took greater advantage of academic resources like mentoring from professors (seeking out resources 1.89 vs. 1.45 times on average).
Continuing-generation students in the difference-education group posted average GPAs of 3.51 and sought out resources 1.8 times over the course of the school year. In the standard program, those numbers were 3.46 and 2.18, respectively.
The researchers write, “Using the personal stories of senior college students, a one-hour difference-education intervention at the beginning of college reduced the social-class achievement gap among first-generation and continuing-generation college students by 63 percent at the end of the first year and also improved first-generation students’ college transition on numerous psychosocial outcomes (e.g. psychological adjustment and academic and social engagement).”
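The 63 percent figure is consistent with the GPAs reported above; here is a back-of-the-envelope check (not the paper’s exact statistical calculation):

```python
# GPA gap between continuing- and first-generation students in each program
gap_standard = 3.46 - 3.16  # standard program
gap_diff_ed = 3.51 - 3.40   # difference-education program

# Fraction of the achievement gap closed by the intervention
reduction = 1 - gap_diff_ed / gap_standard
print(f"{reduction:.0%}")  # 63%
```

The intervention shrinks a 0.30-point gap to about 0.11 points, roughly a 63 percent reduction.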
A bonus was that both first- and continuing-generation students who participated in the difference-education program gained a deeper understanding of how students’ diverse backgrounds and perspectives matter in college than did their peers in the standard program, according to the study.
Continuing-generation students in the difference-education program also experienced a smoother transition to college compared with their peers in the standard program.
“Both first- and continuing-generation students experienced a more positive college transition,” Hamedani says. “They were less stressed, felt like they fit in socially, and were more connected to their families, friends, and school.”

Beyond ‘bridge’ programs
Hamedani says the traditional approach in higher education is to help first-generation students with “bridge” programs that teach academic tips, tools, and strategies, such as how to choose a major or study for exams. While providing academic resources can help, they are not sufficient—students also need psychological resources to support them on their path to success.
“In American society,” she says, “we try not to talk about our class differences. We found, however, that college students can learn a lot about themselves and one another when they do so. Engaging students about differences, when done in the right way, can be extremely beneficial and empowering.”
Hamedani notes, “Higher education institutions have a responsibility to support and prepare students for success in our increasingly diverse and multicultural society.”
Coauthors on the research paper included management professor Nicole Stephens and psychology professor Mesmin Destin, both from Northwestern University.
Source: Stanford University
The post Talking about social class eases college achievement gap appeared first on Futurity.
A cyber buddy might just give exercise enthusiasts—and those who are less than “enthused”—the extra nudge they need during a workout, new research suggests.
The study, which appears in the Games for Health Journal, is the first to indicate that although a human partner is still a better motivator during exercise, a software-generated partner also can be effective.
“We wanted to demonstrate that something that isn’t real can still motivate people to give greater effort while exercising than if they had to do it by themselves,” says Deborah Feltz, a professor in Michigan State University’s kinesiology department who led the study with co-investigator Brian Winn, associate professor in the College of Communication Arts and Sciences.
The implications from the research also could open the door for software and video game companies to create cyber buddy programs based on sport psychology.
“Unlike many of the current game designs out there, these results could allow developers to create exercise platforms that incorporate team or partner dynamics that are based on science,” says Feltz.

Virtual buddy
Using “CyBud-X,” an exercise game specifically developed for Feltz’s research, 120 college-aged participants were given five different isometric plank exercises to do with one of three same-sex partner choices.
Along with a human partner option, two software-generated buddies were used—one that looked nearly human and another that looked animated. Images of the participant and partner were then projected onto a screen via webcam during the exercises.
A significant motivational gain was observed in all partner conditions.
“Even though participants paired with a human partner held their planks, on average, one minute and 20 seconds longer than those with no partner, those paired with one of the software-generated buddies still held out, on average, 33 seconds longer,” says Feltz.

Part of a team
Much of Feltz’s research in this area has focused on the Köhler Motivation Effect, a phenomenon that explains why people who may not be adept exercisers themselves perform better with a moderately better partner or team than when working out alone.
Her findings lend credence to the idea that programs such as “CyBud-X” can make a difference in the way people perform.
“We know that people tend to show more effort during exercise when there are other partners involved because their performance hinges on how the entire team does,” she says. “The fact that a nonhuman partner can have a similar effect is encouraging.”
The National Institutes of Health funded the study.
Source: Michigan State University
A new compound that targets an important brain receptor dramatically blocks cocaine’s reward effect and significantly blunts relapse, new research with rats shows.
Jun-Xu Li, assistant professor of pharmacology and toxicology at the University at Buffalo, says the study is one of the first to show convincingly that the drug (known as RO5263397) has the potential to treat cocaine addiction.
The findings are especially important, Li says, since despite many years of research, there are no effective medications for treating cocaine addiction.
“Our research shows that trace amine-associated receptor 1—TAAR 1—holds great promise as a novel drug target for the development of medications for cocaine addiction,” Li says.
TAAR 1 is a receptor in the brain that is activated by minute amounts of brain chemicals called trace amines. The compound targets TAAR 1, which is expressed in key drug reward and addiction regions of the brain.
“Because TAAR 1 anatomically and neurochemically is closely related to dopamine—one of the key molecules in the brain that contributes to cocaine addiction—and is thought to be a ‘brake’ on dopamine activity, drugs that stimulate TAAR 1 may be able to counteract cocaine addiction,” Li says.

May prevent relapse
One of the ways researchers test the rewarding effects of a drug in animals is called conditioned place preference. In this type of test, the animal’s persistence in returning to, or staying at, a physical location where the drug is given indicates the drug has rewarding effects.
“When we give the rats RO5263397, they no longer perceive cocaine as rewarding, suggesting that the primary effect that drives cocaine addiction in humans has been blunted,” says Li.
The results also suggest the drug made it less likely for the rats to relapse.
“Cocaine users often stay clean for some time, but may relapse when they re-experience cocaine or hang out in the old cocaine use environments. We found that RO5263397 markedly blocked the effect of cocaine or cocaine-related cues for priming relapse behavior,” Li says.
“Also, when we measured how hard the animals are willing to work to get an injection of cocaine, RO5263397 reduced the animals’ motivation to get cocaine. This compound makes rats less willing to work for cocaine, which led to decreased cocaine use.”
Other researchers from the University at Buffalo, the College of Charleston, and the Research Triangle Institute contributed to the study.
The National Institutes of Health funded the research.
Source: University at Buffalo
A new study with lung cancer patients, mostly smokers between the ages of 51 and 79, sheds light on the stigma often felt by these patients, the emotional toll it can take, and how health providers can help.
“It’s eye-opening when a patient says to you that they feel like lung cancer ‘just gets shoved under the rug,’” says Rebecca Lehto, who led the project and is an assistant professor in Michigan State University’s College of Nursing. “Patients in one of the focus groups actually associated lung cancer with a black ribbon.”
Previous research has shown that lung cancer carries a stigma. Because lung cancer is primarily linked to smoking behaviors, the public’s opinion of the disease can often be judgmental. Today, lung cancer remains the leading cause of cancer death globally.
Yet Lehto indicates that up to 25 percent of lung cancer patients worldwide have never smoked. The World Health Organization has identified air pollution as a cause, and genetics also have been associated with the disease.
“No matter how a patient gets lung cancer, it shouldn’t affect the care they receive or the role empathy should play,” she says.
Lehto’s goal is to raise awareness among health care providers about the additional burden stigma places on patients and develop patient care strategies that strengthen coping skills and symptom management.
“Understanding a disease from the patient’s perspective is essential to providing the best medical care to anyone,” she says.
The study evaluates feedback from four focus groups—a format Lehto suggests is uncommon in this particular area of research.
“There’ve been several studies examining lung cancer stigma, but most have relied on survey data,” she says. “Most of the groups in this study had three to four people participating and relied on a group dynamic to foster discussion. The sessions actually appeared quite therapeutic, acting more like a peer group.”
Lehto’s key findings show participants expressing guilt, self-blame, anger, regret, and alienation in their family and societal interactions. Many also discussed feeling uncomfortable with their health care providers and even feared their care might be negatively affected because of their smoking history.
Although she admits more research is needed with larger, more diverse patient samples, she says her findings could help substantiate the patient perspective on a critical issue that is of sociological importance. Lehto hopes the results will encourage health care providers to examine their own perceptions about lung cancer stigma and be more aware of how it impacts the patient.
“Arming providers with rich, contextual information may help us put biases aside and heighten empathy and understanding,” she says. “That would be a step in the right direction.”
The study appears in the European Journal of Oncology Nursing. Michigan State’s College of Nursing funded the work.
Source: Michigan State University
To learn more about massive lava flows—the kind that coincide with the breakup of continents—geoscientists study the African tectonic plate. Here, the Great Rift Valley of East Africa provides a snapshot of how a continent can be torn apart.
“For decades, there’s been a big debate as to where the lavas from this massive outpouring came from,” says Tyrone Rooney, a geologist at Michigan State University. “Did they emit from deep within the Earth? Or was there some contribution from shallower sources?
“Our paper shows that some lavas came from within the African tectonic plate itself.”
The findings, published in the journal Geology, are applicable to continental breakup around the globe, says Rooney.

Way bigger than Vesuvius
Many nonscientists think of big eruptions in terms of Mount St. Helens or Vesuvius. These were mere drops in the bucket compared to what Rooney and his colleagues are studying.
The ancient African event is estimated to have poured out 350,000 cubic kilometers of lava about 30 million years ago. That’s comparable to twice the amount of water in all the world’s lakes, Rooney explains.
While much of this lava is probably derived from deep sources, Rooney’s team found that some parts of the tectonic plate also have melted to form an unusual group of lavas in Ethiopia. The researchers showed that the rocks, artifacts from the ancient outpouring, had chemical signatures of materials found in the lithosphere and were distinctly different from most of the other rocks in Ethiopia.

Spectrometers and lasers
Rooney and his team were able to confirm their findings in part because they had access to tools their predecessors could only imagine. The new approaches are allowing them to challenge long-standing theories in their field.
For example, mass spectrometers are employed to reveal the rocks’ chemical signatures. By identifying the lavas’ elemental characteristics, the scientists can trace their origin to shallow sources or to deep within the mantle. Using lasers, scientists can transform rock into a fine mist and measure its composition.
In a surprise finding, the team’s lab experiments revealed that the Ethiopian samples matched rocks collected from other, distant regions. The lavas in Arabia, Jordan, Egypt, and Sudan are similar, which means that some of the ingredients that supply the massive outpourings, or flood basalts, have a shallow source that is tapped as the continents split apart. Indeed, the seeds of the lithosphere’s own destruction may be contained within it, Rooney says.
“We’re interested in this because these massive outpourings happen around the same time continents break apart, create new oceans, and affect the planet and the environment on a global scale,” he adds. “So knowing the source of the lava gives us insights into a process that we still know little about.”
The National Science Foundation is supporting Rooney’s work.
Source: Michigan State University
Women who stop eating fish shortly before or during their pregnancy may only lower their child’s exposure to contaminants known as POPs by 10 to 15 percent.
The estimate is based on a new modeling study that suggests fish consumption advisories for expecting mothers are ineffective in reducing infant exposure to long-lived contaminants like persistent organic pollutants (POPs).
The study looks at how different levels of environmental contamination, a mother’s compliance with advisories, and the behavior of chemicals in the body influenced exposure in her children.
“We have to be careful in saying fish advisories don’t work at all because they can work very well for reducing exposure to quickly eliminated contaminants, such as mercury,” says Matt Binnington, a University of Toronto Scarborough PhD student. “But for POPs we found that they are not very effective.”

Long-lasting pollutants
POPs are compounds that take a long time to break down and as a result can persist in the environment and begin to accumulate in humans by way of the food chain. While many POPs such as DDT and PCBs have long been banned from production, they still exist in the environment.
Fish advisories have been developed for these chemicals because they are easily passed from mothers to their children during pregnancy and nursing, potentially affecting healthy infant neurodevelopment.
Binnington says consumption advisories for many POPs are ineffective because these chemicals can remain in the body for years or even decades, thanks to properties that make them difficult for the human body to eliminate. The same is not true for mercury-based advisories, since mercury remains in the body for a much shorter time than POPs.
“Something like mercury stays in the body for only a few months and by temporarily adjusting your diet you can reduce exposure,” says Binnington.
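The contrast can be sketched with simple first-order elimination. The half-lives below are rough, illustrative values drawn from the general literature, not figures from this study: on the order of two months for methylmercury and a decade or more for some persistent PCBs.

```python
def fraction_remaining(months, half_life_months):
    """Body burden left after first-order (exponential) elimination."""
    return 0.5 ** (months / half_life_months)

# Suppose exposure stops at the start of a nine-month pregnancy:
print(fraction_remaining(9, 2))    # mercury (~2-month half-life): ~4% remains
print(fraction_remaining(9, 120))  # persistent PCB (~10-year half-life): ~95% remains
```

Under these assumptions, a temporary dietary change clears most of a mother’s mercury burden but barely dents her accumulated POPs, which is the pattern the modeling study describes.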
The limitation with consumption advisories is that while they inform people what not to eat, they do not offer much in the way of healthy alternatives, says Professor Frank Wania. In fact, substituting fish with meat such as beef may even end up doing more harm.
“Substituting fish with beef may actually result in higher exposure to other contaminants,” he says, adding there is also a loss of nutritional benefits by not eating fish.
The research, which received funding through NSERC and the Northern Contaminants Program of the Canadian Department for Aboriginal Affairs and Northern Development, is published in the journal Environmental Health Perspectives.
Source: University of Toronto
Physicists are closer to making a quantum computer a reality by demonstrating a new level of reliability in a five-qubit array.
A fully functional quantum computer is one of the holy grails of physics. Unlike conventional computers, the quantum version uses qubits (quantum bits), which make direct use of the multiple states of quantum phenomena. When realized, a quantum computer will be millions of times more powerful at certain computations than today’s supercomputers.
Quantum computing relies on aspects of quantum mechanics such as superposition. This notion holds that any physical object, such as an atom or electron—what quantum computers use to store information—can exist in all of its theoretical states simultaneously. This could take parallel computing to new heights.
“Quantum hardware is very, very unreliable compared to classical hardware,” says Austin Fowler, a staff scientist in the physics department at the University of California, Santa Barbara, whose theoretical work inspired the experiments. “Even the best state-of-the-art hardware is unreliable. Our paper shows that for the first time reliability has been reached.”
While Fowler and colleagues have shown logic operations at the threshold, the array must operate below the threshold to provide an acceptable margin of error.
“Qubits are faulty, so error correction is necessary,” says graduate student and co-lead author Julian Kelly, who worked on the five-qubit array.
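The need for error rates below a threshold can be illustrated with a classical analogy. The sketch below is not the group's surface code, which corrects quantum errors without directly reading the qubits; it is a simple repetition code with majority voting, showing how redundancy suppresses errors when the per-bit error rate is low:

```python
import random

def logical_error_rate(p, n_bits=5, trials=100_000, seed=1):
    """Each of n_bits copies flips independently with probability p;
    a majority vote then decides the logical bit. Returns the fraction
    of trials in which the vote itself comes out wrong."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(n_bits))
        if flips > n_bits // 2:  # majority of copies flipped
            errors += 1
    return errors / trials

# With a 1% physical error rate, five-fold redundancy drives the
# logical error rate far below 1%.
print(logical_error_rate(0.01))
```

A repetition code only helps while the physical error rate stays below its threshold of one half; quantum codes such as the surface code have far lower thresholds, which is why hardware error rates near 1 percent are the benchmark the group is working against.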
“We need to improve and we would like to scale up to larger systems,” says lead author Rami Barends, a postdoctoral fellow with the group. “The intrinsic physics of control and coupling won’t have to change but the engineering around it is going to be a big challenge.”

Five Xmons in a single row
The unique configuration of the group’s array results from the flexibility of geometry at the superconductive level, which allowed the scientists to create cross-shaped qubits they named Xmons.
Superconductivity results when certain materials are cooled to a critical level that removes electrical resistance and eliminates magnetic fields. The team chose to place five Xmons in a single row, with each qubit talking to its nearest neighbor, a simple but effective arrangement. The team reports its results in the journal Nature.
“Motivated by theoretical work, we started really thinking seriously about what we had to do to move forward,” says John Martinis, a physics professor. “It took us a while to figure out how simple it was, and simple, in the end, was really the best.”
“If you want to build a quantum computer, you need a two-dimensional array of such qubits, and the error rate should be below 1 percent,” explains Fowler. “If we can get one order of magnitude lower—in the area of 10⁻³ or 1 in 1,000 for all our gates—our qubits could become commercially viable.
“But there are more issues that need to be solved. There are more frequencies to worry about and it’s certainly true that it’s more complex. However, the physics is no different.”
According to Martinis, it was Fowler’s surface code that pointed the way, providing an architecture to put the qubits together in a certain way.
“All of a sudden, we knew exactly what it was we wanted to build because of the surface code,” Martinis says. “It took a lot of hard work to figure out how to piece the qubits together and control them properly. The amazing thing is that all of our hopes of how well it would work came true.”
Source: UC Santa Barbara
The post Team builds ‘reliable’ array for quantum computing appeared first on Futurity.
Better-educated people appear more likely to fully recover from traumatic brain injury, suggesting that “cognitive reserve” may help their brains to mend, research shows.
Scientists found that those with at least a college education are seven times more likely than those who didn’t finish high school to be disability-free one year after moderate or severe traumatic brain injury (TBI).
The findings, reported online by the journal Neurology, are new among TBI investigators. They mirror, however, those from Alzheimer’s disease research, in which higher educational attainment—believed to be an indicator of a more active, or more effective, use of the brain’s “muscles” and therefore of more cognitive reserve—has been linked to slower progression of dementia.
“After this type of brain injury, some patients experience lifelong disability, while others with very similar damage achieve a full recovery,” says study leader Eric B. Schneider, an epidemiologist at the Johns Hopkins University School of Medicine.
“What we learned [in the study] may point to the potential value of continuing to educate yourself and engage in cognitively intensive activities,” Schneider says. “Just as we try to keep our bodies strong in order to help us recover when we are ill, we need to keep the brain in the best shape it can be.”
Schneider describes cognitive reserve as “the brain’s ability to be resilient in the face of insult or injury.” He says researchers don’t currently understand the biological mechanisms that might account for the link between years of schooling and improved recovery.
“People with increased cognitive reserve capabilities may actually heal in a different way that allows them to return to their pre-injury function and/or they may be able to better adapt and form new pathways in their brains to compensate for the injury,” Schneider says. “Further studies are needed to not only find out, but also to use that knowledge to help people with less cognitive reserve.”

Full recovery
Schneider conducted the research with Robert D. Stevens, a neuro-intensive care physician with Johns Hopkins’ department of anesthesiology and critical care medicine. They studied 769 patients who had been hospitalized with a moderate to severe TBI and subsequently admitted to a rehabilitation facility.
Of those patients, 219—or 27.8 percent—were free of any detectable disability one year after their injury. Fewer than 10 percent of those who had not completed high school recovered. On the other hand, nearly 31 percent of those with between 12 and 15 years of schooling and nearly 40 percent of patients with 16 or more years of education fully recovered.
When adjusted for other factors, such as race, injury severity, and length of stay in rehabilitation, the advantage for the most highly educated patients is seven-fold, Schneider says.
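As a rough, unadjusted check on the direction of that result, the recovery rates above can be converted into an odds ratio. This is an illustrative calculation using the article's rounded figures (about 40 percent versus just under 10 percent), not the study's adjusted model:

```python
def odds(p):
    """Convert a probability into odds (p against 1 - p)."""
    return p / (1 - p)

# Rounded recovery proportions quoted above: ~40% for patients with
# 16+ years of education, under 10% for those who did not complete
# high school.
p_college, p_no_high_school = 0.40, 0.10

odds_ratio = odds(p_college) / odds(p_no_high_school)
print(round(odds_ratio, 1))  # 6.0 on these raw numbers
```

The study's seven-fold figure is an adjusted estimate, so it differs from this raw ratio.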
“Understanding the underpinnings of cognitive reserve in terms of brain biology could generate ideas on how to enhance recovery from brain injury,” Stevens says.
Stevens has received funding from the Defense Advanced Research Projects Agency, the Department of Defense, and the Johns Hopkins Brain Sciences Institute.
Source: Johns Hopkins University
Cougars may have survived the mass extinction that took place about 12,000 years ago because they were not particular about what they ate, unlike their more finicky cousins, the saber-tooth cat and the American lion.
Both animals perished along with the woolly mammoth and many of the other supersized mammals that walked the Earth during the late Pleistocene.
“Before the Late Pleistocene extinction, six species of large cats roamed the plains and forests of North America. Only two—the cougar and jaguar—survived,” says Larisa R.G. DeSantis, assistant professor of earth and environmental sciences at Vanderbilt University. “The goal of our study was to examine the possibility that dietary factors can explain the cougar’s survival.”
For their investigation, DeSantis and coauthor Ryan Haupt of the University of Wyoming employed a new technique called dental microwear texture analysis that uses a confocal microscope to produce a 3D image of the surface of a tooth.
The image is then analyzed for microscopic wear patterns. The analysis of the teeth of modern carnivores, including hyenas, cheetahs, and lions has established that the meals an animal consumes during the last few weeks of its life leave telltale marks. Chowing down on red meat, for example, produces small parallel scratches while chomping on bones adds larger, deeper pits.
For the study, published in Biology Letters, the researchers analyzed the teeth of 50 fossil and modern cougars, and compared them with the teeth of saber-tooth cats and American lions excavated from the La Brea Tar Pits in Los Angeles and the teeth of modern African carnivores including cheetahs, lions, and hyenas.

La Brea cougars
Previously, DeSantis and others found that the dental wear patterns of the extinct American lions closely resembled those of modern cheetahs, which are extremely finicky eaters that mostly consume tender meat and rarely gnaw on bones. Saber-tooth cats were instead similar to African lions and chewed on both flesh and bone.
Among the La Brea cougars, the researchers found significantly greater variation between individuals than they did in the other large cats, including saber-toothed cats. Some of the cougars show wear patterns similar to those of the finicky eaters but on others they found wear patterns closer to those of modern hyenas, which consume almost the entire body of their prey, bones included.
“This suggests that the Pleistocene cougars had a ‘more generalized’ dietary behavior,” DeSantis says. “Specifically, they likely killed and often fully consumed their prey, more so than the large cats that went extinct.”
This is consistent with the dietary behavior and dental wear patterns of modern cougars, which are opportunistic predators and scavengers of abandoned carrion and fully consume the carcasses of small and medium-sized prey, a “variable dietary behavior that may have actually been a key to their survival.”
The National Science Foundation supported the research.
Source: Vanderbilt University
A study of subarctic forest moths in Finnish Lapland suggests that scientists may be underestimating the effects of climate change on animals and plants because much of the harm is hidden from view.
The 32-year analysis that looked at populations of 80 different moth species shows 90 percent of them were either stable or increasing from 1978 to 2009. During that time, average annual temperatures at the study site rose 3.5 degrees Fahrenheit, and winter precipitation increased as well.
“You see it getting warmer, you see it getting wetter, and you see that the moth populations are either staying the same or going up. So you might think, ‘Great. The moths like this warmer, wetter climate.’ But that’s not what’s happening,” says ecologist Mark Hunter, professor of ecology and evolutionary biology.
Hunter used advanced statistical techniques to examine the roles of different ecological forces affecting the moth populations and found that warmer temperatures and increased precipitation reduced the rates of population growth.
“Every time the weather was particularly warm or particularly wet, it had a negative impact on the rates at which the populations grew,” he says. “Yet, overall, most of these moth populations are either stable or increasing, so the only possibility is that something else other than climate change—some other factor that we did not measure—is buffering the moths from substantial population reductions and masking the negative effects of climate change.”
Published in Global Change Biology, the findings have implications that reach beyond moths in Lapland, Hunter says.
If unknown ecological forces are helping to counteract the harmful effects of climate change on these moths, it’s conceivable that a similar masking of impacts is happening elsewhere. “We could be underestimating the number of species for which climate change has negative impacts because those effects are masked by other forces.”

388,779 moths
The study was conducted at the Värriö Strict Nature Reserve, 155 miles north of the Arctic Circle and less than four miles from the Finnish-Russian border. The nearest major road is more than 60 miles away.
Between 1978 and 2009, Finnish scientists used light traps at night to catch 388,779 moths from 456 species. Eighty of the most abundant species were then analyzed. A statistical technique called time series analysis examined how various ecological forces, including climate, affected per capita population growth.
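The time-series step can be sketched in a few lines. Everything below is synthetic and illustrative (invented coefficients and simple Gompertz-style dynamics, not the study's data or exact model); the point is that per-capita growth rates, computed as differences in log abundance, can be regressed on prior abundance and climate variables, which is how a negative climate effect can be detected even in a population that stays stable overall:

```python
import numpy as np

rng = np.random.default_rng(0)
years = 32

# Hypothetical climate anomalies for one moth species' site
temp = rng.normal(0, 1, years)
precip = rng.normal(0, 1, years)

# Simulate log abundance with density dependence and negative
# climate effects (all coefficients invented for illustration)
logN = np.empty(years + 1)
logN[0] = 5.0
for t in range(years):
    r = 0.8 - 0.15 * logN[t] - 0.3 * temp[t] - 0.2 * precip[t] \
        + rng.normal(0, 0.05)
    logN[t + 1] = logN[t] + r

# Per-capita growth rates regressed on prior abundance and climate,
# in the spirit of a time-series analysis of population data
r_obs = np.diff(logN)
X = np.column_stack([np.ones(years), logN[:-1], temp, precip])
coef, *_ = np.linalg.lstsq(X, r_obs, rcond=None)
print(coef)  # intercept, density dependence, temperature, precipitation
```

In this toy example the fitted temperature and precipitation coefficients come out negative even though the simulated population hovers around a stable level, mirroring the pattern Hunter describes.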
Scientists want to know how climate change will affect insects because the six-legged creatures play key roles as agricultural pests, pollinators, food sources for vertebrates, vectors of human disease, and drivers of various ecosystem processes.
Butterflies and moths may be particularly susceptible to population fluctuations in response to climate change—especially at high latitudes and high elevations.

Good news or not?
Most recent studies of moth abundance have shown population declines. So Hunter and his colleagues were surprised to find that 90 percent of the moth species in the Lapland study were either stable or increasing.
On one level, the results can be viewed as a good news climate story: in the face of a rapid environmental change, these moths appear to be thriving, suggesting that they are more resilient than scientists had expected.
But the other side of that coin is that unknown ecological forces appear to be buffering the harmful effects of climate change and hiding those effects from view.
“The big unknown is how long this buffering effect will last,” Hunter says. “Will it keep going indefinitely, or will the negative effects of climate change eventually just override these buffers, causing the moth populations to collapse?”
Another big unknown: what ecological forces are currently buffering the Lapland moths from the negative effects of a warming climate?
Finnish team members who’ve been collecting moths at the Värriö reserve for decades say they have noticed a gradual increase in tree and shrub density, increased rates of tree growth, and a rise in the altitude of the tree line.
Trees provide food and shelter for moths, and leaf litter offers overwintering sites and resting areas away from predators. Perhaps the observed vegetation changes are helping to offset the negative effects of warmer temperatures and increased precipitation. That possibility was not analyzed in the current study.
A Strategic Research Grant from the University of Turku and the Nordic Centre of Excellence Tundra, the National Science Foundation, the Academy of Finland Center of Excellence, and the Nordic Center of Excellence CRAICC supported the work.
Source: University of Michigan
The post Moths raise concerns about hidden harm of climate change appeared first on Futurity.
Monitoring the populations of “uncontacted” tribes via Google Earth satellite images may be a noninvasive way to help ensure the survival of indigenous Brazilian villages.
Lowland South America, including the Amazon Basin, harbors most of the last indigenous societies that have limited contact with the outside world. Studying these tribes, located deep within Amazonian rainforests, gives scientists a glimpse at what tribal cultures may have been like before the arrival of Europeans.
Now, researchers use satellite images to assess the demographic health of one particular village of isolated people on the border between Brazil and Peru. Remote surveillance is the only method to safely track uncontacted indigenous societies and may offer information that can improve their chances for long-term survival.
Rob Walker, an assistant professor of anthropology in the College of Arts and Science at the University of Missouri, collaborated with Marcus Hamilton, postdoctoral fellow at the Santa Fe Institute and adjunct assistant professor of anthropology at the University of New Mexico. The study is published online in The American Journal of Human Biology.

Estimated populations
They used Google Earth satellite imagery to estimate the area of the fields and the size of the village belonging to the tribe, as well as the living area of the tribe’s temporary housing, and compared that with similar estimates for 71 other Brazilian indigenous communities.
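One simple way to turn such measurements into a population figure is to scale the isolated village's cleared area by a typical people-per-area density drawn from villages whose populations are known. The numbers below are invented for illustration; this is a sketch of the general idea, not the authors' published method:

```python
from statistics import median

# Hypothetical reference data (illustrative, not the study's
# measurements): (cleared area in hectares, known population)
# for surveyed indigenous villages.
reference = [(2.0, 35), (5.5, 90), (1.2, 20), (8.0, 150)]

# Median density is robust to one or two unusual villages
density = median(pop / area for area, pop in reference)

village_area = 2.2  # hectares, as if measured from satellite imagery
estimate = village_area * density
print(round(estimate))  # ~38 people with these made-up inputs
```

With inputs of this scale, the estimate lands in the same range as the "no more than 40 people" figure reported for the real village.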
“We found that the estimated population of the village is no more than 40 people,” Walker says. “A small, isolated village like this one faces an imminent threat of extinction. However, forced contact from the outside world is ill advised, so a noninvasive means of monitoring the tribe is recommended. A remote surveillance program using satellite images taken periodically of this group would help track the movements and demographic health of the population without disrupting their lives.”

Creating boundaries
Using information captured from remote surveillance, scientists can help shape policies that mitigate the threats of extinction including deforestation, illegal mining, and colonization in these remote areas.
Additionally, surveillance also can help locate isolated villages, track patterns of migration over time, and inform and create boundaries or buffer zones that would allow tribes to stay isolated, Walker says.
“Close to 100 uncontacted groups are thought to currently exist in Amazonia,” Walker says.
“Deforestation, cattle ranching, illegal mining, and outside colonization threaten their existence. Most of these tribes are swidden horticulturalists and so their slash-and-burn fields are observable in satellite images. But, they do move around, sometimes in response to external threats, and this movement requires constant monitoring if there is to be any hope of preserving their habitat and culture.”
A National Geographic Society Research and Exploration grant supported the study.
Source: University of Missouri
The post How Google Earth images could help save Amazonian tribes appeared first on Futurity.
Binge drinking by teenagers and young adults is strongly associated with liking, owning, and correctly identifying music that references alcohol by brand name.
Based on a national, randomized survey of more than 2,500 people ages 15 to 23, the findings suggest that policy and educational interventions designed to limit the influence of alcohol-brand references in popular music could be important in reducing alcohol consumption in teens and young adults.
“Every year, the average adolescent is exposed to about 3,000 references to alcohol brands while listening to music,” says lead author Brian A. Primack, associate professor of medicine and pediatrics and director of the Program for Research on Media and Health in the University of Pittsburgh’s School of Medicine.
“It is important that we understand the impact of these references in an age group that can be negatively affected by alcohol consumption.”
Alcohol is considered the third-leading, lifestyle-related cause of death in the United States, according to the Centers for Disease Control and Prevention.
“Brand references may serve as advertising, even if they are not paid for by the industry,” says senior author James D. Sargent, co-director of the Cancer Control Research Program at Norris Cotton Cancer Center in New Hampshire and professor of pediatrics in the Geisel School of Medicine at Dartmouth College. “This is why it is useful to examine the influence of brand mentions.”

Drinking survey
Of the 2,541 participants who completed the survey, 1,488, or 59 percent, reported having had a complete alcoholic drink, defined as 12 ounces of beer, 5 ounces of wine, or 1.5 ounces of hard liquor at one time. Of those, 18 percent reported binging—or drinking heavily over a short period of time—at least monthly, and 37 percent reported having had problems, such as injuries, due to alcohol.
In the survey, which could be completed either online or on paper, participants were given the titles of popular songs that include alcohol mentions and asked if they liked or owned the song. They also were tested to determine if they could spontaneously recall what brand of alcohol was mentioned in the lyrics.

Odds double
Survey participants who could correctly recall alcohol brands in songs had more than twice the odds of having had a complete alcoholic drink, compared to those who could not recall the brand, even after adjusting for factors including age, socioeconomic status, and alcohol use by friends or parents. The participants who could identify the alcohol brands in songs also had greater odds of having ever binged on alcohol.
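To make "more than twice the odds" concrete, here is the arithmetic on a hypothetical 2×2 table (made-up counts for illustration; the study reports model-adjusted odds ratios, not raw counts like these):

```python
# Hypothetical counts: rows = could recall alcohol brands (yes/no),
# columns = reported having had a complete drink (yes/no).
recall_drink, recall_no_drink = 300, 150
no_recall_drink, no_recall_no_drink = 400, 400

odds_recall = recall_drink / recall_no_drink           # 2.0
odds_no_recall = no_recall_drink / no_recall_no_drink  # 1.0

odds_ratio = odds_recall / odds_no_recall
print(odds_ratio)  # 2.0: brand-recall group has twice the odds
```

An odds ratio of 2 means the recall group's odds of drinking are double the non-recall group's, which is not the same as being twice as likely unless drinking is rare in both groups.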
“A surprising result of our analysis was that the association between recalling alcohol brands in popular music and alcohol drinking in adolescents was as strong as the influence of parental and peer drinking and an adolescent’s tendency toward sensation-seeking,” says Primack. “This may illustrate the value that this age group places on the perceived opinions and actions of music stars.”
Primack says that one possible solution could be to empower adolescents with critical thinking skills. “Media literacy is a growing educational methodology that may be successful in helping young people make healthier decisions,” he says. “In the case of alcohol, it may be valuable to help them understand how alcohol-brand references in music may manipulate their thoughts and emotions to sell them a product.”
The study is published online in the journal Alcoholism: Clinical & Experimental Research.
Source: University of Pittsburgh
The post Alcohol brands in songs may encourage teens to binge appeared first on Futurity.
Chemists have created nanoparticles that can sample crude oil and natural gas for hydrogen sulfide before pumping.
Crude oil and natural gas inherently contain hydrogen sulfide, which gives off a “rotten egg” smell. Even a 1 percent trace of sulfur turns oil into what’s known as “sour crude,” which is toxic and corrodes pipelines and transportation vessels, says James Tour, a chemist at Rice University.
The extra steps required to turn the sour into “sweet” crude are costly.
“So it’s important to know the content of what you’re pumping out of the ground, and the earlier the better,” Tour adds.
Limited exposure to hydrogen sulfide causes sore throats, shortness of breath, and dizziness, according to the researchers. The human nose quickly becomes desensitized to hydrogen sulfide, leading to an inability to detect higher concentrations. That can be fatal, researchers say.

Check before you pump
Tour and colleagues have been developing techniques to use nanoreporters—which are based on nanometer-sized carbon material—to gather intel from oil fields. When the newly modified nanoreporters are exposed to hydrogen sulfide, the particles’ fluorescent properties immediately change.
When pumped out of a production well, the particles can be analyzed with a spectrometer to determine the level of contamination.
“This paper is a big step because we’re making our nanoreporters detect something that’s not oil,” says Michael Wong, suggesting the possibility that nanoparticles may someday be able to capture sulfur compounds before they can be pumped to the surface.
“Even if that’s not cost-effective, just having information about the sulfur content may be enough to tell a company, ‘Let’s cap this well and move on to a cleaner site,’” adds Wong, a chemistry professor and one of the coauthors of the paper published in the journal ACS Applied Materials and Interfaces.

Stable at high temperatures
Modifying the particles with common polyvinyl alcohol (PVA) was the key to making the nanoreporters stable in temperatures as high as 100 degrees Celsius (212 degrees Fahrenheit). Testing in beds of sandstone or with actual Kuwaiti dolomite, to mimic oilfield environments, helped the team perfect the size and formula for nanoreporters that are most likely to survive a trip through the depths and return with data.
“We found the longer the PVA polymer chains, the more stable the nanoparticles were in the high temperatures they’re subjected to,” says Rice graduate student Chih-Chau Hwang, co-lead author of the paper with fellow graduate student Gedeng Ruan.
“The method of detection is so sensitive that large amounts of nanoreporters need not be pumped downhole,” Tour adds. “This is enormously important for workers in the field to know for aspects of safety, lifetime of equipment, and value of the afforded oil.”
The Advanced Energy Consortium supported the research.
Source: Rice University
The post Nanoreporters go underground to check oil for sulfur appeared first on Futurity.
Tickling a baby’s toes may be cute, but it’s also possible that those touches could help babies learn the words in their language.
New research shows that a caregiver’s touch could help babies to find words in the continuous stream of speech.
“We found that infants treat touches as if they are related to what they hear and thus these touches could have an impact on their word learning,” says Amanda Seidl, an associate professor of speech, language, and hearing sciences at Purdue University who studies language acquisition.
“We think of touch as conveying affection, but our recent research shows that infants can relate touches to their incoming speech signal. Others have looked at the role of touch with respect to babies forming an attachment and physical development. But until now the impact of touch on language learning has not been explored.”
The findings appear in Developmental Science. Seidl is interested in the multitude of cues or sources of information that babies may combine to learn their language. Learning words presents a challenge for infants since most of the words they hear are presented in a continuous stream of speech, rather than in isolation, by their caregivers.
“Parents may pause before saying an infant’s name, but they almost never do so for other words. This research explored whether touches could help infants to find where words begin and end in the continuous stream of speech. They need to find words before they can attach real meaning to their words,” Seidl says.
“Because names of body parts are often the first words that babies learn and touching is often involved when caregivers talk about body parts, we speculated that touch could act as a cue to word edges.”

Touch cues
A total of 48 English-learning 4-month-olds were tested at Purdue’s Infant Speech Lab in two groups as they sat on a parent’s lap facing an experimenter while listening to a pre-recorded continuous stream of speech of nonsense words.
In the first experiment, every time a nonsense word, such as “dobita,” was spoken, the experimenter touched the baby’s knee. This occurred two dozen times. Also, the word “lepoga” was played 24 times, but the infant was only touched once on her elbow during the playing of this word. The other 23 touches to the elbow occurred on other syllable sequences.
Following this listening, the babies participated in a language preference study, and almost all showed that they had pulled “dobita” out of the continuous stream of speech. This was the word that was reinforced by aligned touching.
In the second experiment, the same format of continuous speech and new words was played, but the experimenter touched his or her eyebrow or chin instead of the baby. The children in this experiment did not show that they had pulled out any words.
“It didn’t matter how much time the infant spent looking at the experimenter’s face, the babies were not able to use these cues in the same way as they were when their own body was touched,” says Seidl, who is now looking at individual differences in how parents speak and touch their baby.

Language predictions?
“I am interested in whether we can predict babies’ language later on from early measures of speech perception,” Seidl says. “If we look at speech perception and learning in a 6-month-old can we predict their language ability at 3 years? If we can find out what kinds of learners young children are, we could target their learning environment to their learning style.”
Also part of the research team are Ruth Tincoff, an assistant professor at Bucknell University, and former Purdue undergraduate student Christopher Baker and former Purdue graduate student Alejandrina Cristia.
The National Science Foundation supported the research.
Source: Purdue University
Cellular processes are not perfect. Sometimes, the by-products of their mistakes are harmless. Other times, they can lead to disease or even death.
With Alzheimer’s disease, the mistake occurs when a protein called amyloid precursor protein (APP) in a neuron’s membrane gets cut in the wrong place, leading to a buildup of abnormal fragments called amyloid-beta. These fragments clump together to form a plaque around neurons, eventually interfering with brain function.
But the cell has systems to deal with mistakes. A protein complex called retromer acts like a cellular garbage truck, collecting faulty gene products and trafficking them to be destroyed or recycled.
For years, Alzheimer’s research has focused on preventing the formation of amyloid-beta with little success. But instead of trying to stop mistakes, what if researchers improved the system for dealing with them?
A team of researchers has devised a new approach to the treatment of Alzheimer’s disease that significantly increases retromer levels while decreasing amyloid-beta levels in neurons, without harming the cell.
The study appears in the online edition of Nature Chemical Biology.
Dagmar Ringe, professor in the departments of biochemistry and chemistry at Brandeis University, and Gregory Petsko, professor emeritus of biochemistry and chemistry, led the research team.

Like reinforcing a garbage truck
Previous research showed that brains affected by Alzheimer’s had lower levels of retromer and by increasing retromer in neurons, amyloid-beta levels decreased. However, this is the first time researchers have found a way to pharmacologically strengthen the retromer complex.
The scientists identified compounds called pharmacological chaperones that bind to retromer’s weakest points, making the complex stronger, more resilient, and better able to move amyloid-beta.
When the chaperone, named R55, was added to neurons in cell culture, it bound to and stabilized retromer—like reinforcing a garbage truck with stronger parts. Researchers saw an almost immediate increase in retromer levels and a decrease in amyloid-beta levels. The researchers are currently testing the clinical effects of R55 in mice.
Ringe cautions that this research is not a cure for Alzheimer’s but a potential treatment.
“This research cannot stop the progress of Alzheimer’s once it is diagnosed,” she says. “But what it can do is ameliorate it and slow down that progress, which is a very good start.”
Researchers from Columbia University Medical Center and Weill Cornell Medical College also contributed to the study.
Source: Brandeis University
Astronomers have discovered the first “self-lensing” binary star system. Like so many interesting discoveries, they say this one happened largely by accident.
Astronomers detect planets too far away for direct observation by the dimming in light when a world passes in front of, or transits, its host star. Researchers were looking for transits others might have missed in data from the planet-hunting Kepler Space Telescope when they saw something in the binary star system KOI-3278 that didn’t make sense.
“I found what essentially looked like an upside-down planet,” says graduate student Ethan Kruse, who works with Eric Agol, associate professor of astronomy at University of Washington. “What you normally expect is this dip in brightness, but what you see in this system is basically the exact opposite—it looks like an anti-transit.”
As reported in the journal Science, the two stars of KOI-3278, about 2,600 light-years (a light-year is 5.88 trillion miles) away in the Lyra constellation, take turns being nearer to Earth as they orbit each other every 88.18 days. They are about 43 million miles apart, roughly the distance the planet Mercury is from the sun. The white dwarf, a cooling star thought to be in the final stage of life, is about Earth’s size but 200,000 times more massive.
Like a magnifying glass
The increase in light, rather than the dip Kruse thought he’d see, is the white dwarf bending and magnifying light from its more distant neighbor through gravitational lensing, like a magnifying glass. The mass of the closer star can be measured by how powerfully it magnifies light from its more distant companion star.
“The basic idea is fairly simple,” Agol says. “Gravity warps space and time and as light travels toward us it actually gets bent, changes direction. So, any gravitational object—anything with mass—acts as a magnifying glass,” though a weak one. “You really need large distances for it to be effective.”
“The cool thing, in this case, is that the lensing effect is so strong, we are able to use that to measure the mass of the closer, white dwarf star. And instead of getting a dip now you get a brightening through the gravitational magnification.”
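The numbers quoted for KOI-3278 are enough for a rough back-of-the-envelope check of both the orbit and the strength of the lensing. The sketch below assumes a white dwarf of about 0.63 solar masses and a Sun-like companion; neither value appears in this article, and the point-lens brightening formula is a standard extended-source approximation, not the authors' full model:

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
MILE = 1609.34       # meters per mile

# Assumed masses (not stated in the article)
m_wd = 0.63 * M_SUN      # white dwarf
m_comp = 1.00 * M_SUN    # Sun-like companion
period = 88.18 * 86400   # orbital period from the article, in seconds

# Kepler's third law: a^3 = G (M1 + M2) P^2 / (4 pi^2)
a = (G * (m_wd + m_comp) * period**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"separation: {a / MILE / 1e6:.0f} million miles")  # close to the quoted 43

# Einstein radius of the white dwarf, projected at its companion:
# R_E = sqrt(4 G M a / c^2)
r_einstein = math.sqrt(4 * G * m_wd * a / C**2)

# For a source star much larger than the Einstein ring, the fractional
# brightening is roughly 2 * (R_E / R_source)^2 (ignoring the light
# blocked by the white dwarf's own disk).
r_source = 6.96e8  # assumed Sun-like radius, m
boost = 2 * (r_einstein / r_source) ** 2
print(f"Einstein radius ~{r_einstein / 1e3:.0f} km, "
      f"brightening ~{boost * 100:.2f} percent")
```

With these assumed masses the separation comes out near the article's 43 million miles, and the predicted brightening is on the order of a tenth of a percent, small enough to be easy to miss in a light curve.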
This finding improves on 2013 research from the California Institute of Technology, which detected a similar self-lensing effect but without the brightening, because the two stars being studied were much closer together.
“The effect in this system is much stronger,” Agol says. “The larger the distance, the more the effect.”
Milky Way microlensing
Gravitational lensing is a common tool in astronomy. It has been used to detect planets around distant stars within the Milky Way galaxy, and was among the first methods used to confirm Albert Einstein’s general theory of relativity. Lensing within the Milky Way galaxy, such as this, is called microlensing.
But until now, the process had only been used in fleeting instances when a nearby star and a distant star, not otherwise associated in any way, aligned just right before going their separate ways again.
“The chance is really improbable,” says Agol. “As those two stars go through the galaxy they’ll never come back again, so you see that microlensing effect once and it never repeats. In this case, though, because the stars are orbiting each other, it repeats every 88 days.”
White dwarfs are important to astronomy and are used as indicators of the galaxy’s age, astronomers say. Basically embers of burned-out stars, white dwarfs cool off at a specific rate over time. With this lensing, astronomers can measure this white dwarf’s mass and temperature with much greater precision, and follow-up observations may yield its size.
By expanding their understanding of white dwarfs, astronomers take a step closer to learning about the age of the galaxy.
Agol and Kruse have sought time to use the Hubble Space Telescope to study KOI-3278 in more detail, and to see if there are other such star systems waiting to be discovered in the Kepler data.
“If everyone’s missed this one, then there could be many more that everyone’s missed as well,” Kruse says.
Source: University of Washington
Two new tools are letting scientists watch brain activity live, as it happens.
The probes involve proteins that light up as an electric current sweeps down the long tendrils that link nerves together. The scientists can insert these proteins into a specific group of brain cells that they want to study—say, cells in the part of the brain involved in memory, or cells that specifically inhibit other neurons from firing—and then watch those cells as they communicate.
“You want to know which neurons are firing, how they link together, and how they represent information,” says Michael Lin, assistant professor of pediatrics and of bioengineering at Stanford University. “A good probe to do that has been on the wish list for decades.”
Lin and Mark Schnitzer, associate professor of biology and of applied physics at Stanford, developed two different approaches to allow neuroscientists to read brain activity more quickly and sensitively. Their research papers on this topic were published in Nature Neuroscience (Lin’s study) and Nature Communications (Schnitzer’s study).
Develop drugs, study disease
With these tools, scientists can study how we learn, remember, or navigate, as well as any other activity that requires networks of nerves working together. The tools can also help scientists understand what happens when those processes don’t work properly, as in Alzheimer’s or Parkinson’s disease, or other disorders of the brain.
The proteins could also be inserted in neurons in a lab dish. Scientists developing drugs, for example, could expose human nerves in a dish to a drug and watch in real time to see if the drug changes the way the nerve fires. If those neurons in the dish represent a disease, like Parkinson’s disease, a scientist could look for drugs that cause those cells to fire more normally.
For more than a decade, neuroscientists have watched nerves firing only by proxy. Each time a nerve sends a signal, calcium floods into the cell and is then pumped back out in anticipation of the next signal.
Watching the shadows
In fact, Schnitzer developed a miniature camera that he has been using to peer into the brains of mice to record these calcium waves. His lab has focused on studying the region of the brain involved in learning and memory.
But what Schnitzer sees through his tiny camera isn’t the actual nerve activity. He has been watching the shadows, and like any shadows they are a good proxy—but their shapes aren’t always realistic.
Calcium stays in the neuron long after a signal has swept past, and may mask a second signal as it flashes by. Also, sometimes an electrical signal won’t trigger enough calcium to enter a cell for the protein to light up.
“Sensing calcium is insufficient for a full understanding of what’s happening,” Schnitzer says. “There are also many neuronal cell types that are not well studied with calcium probes.”
Frustrated by the lack of effective tools for watching nerves fire, Lin and Schnitzer applied for and received a seed grant from Stanford Bio-X to develop one. These grants support high-risk projects that bring together engineering and biology know-how to solve problems in the field.
The first approach
Although the two labs had the same goal and ended up developing probes with similar qualities, they took very different approaches.
Lin’s lab focuses on engineering proteins that can be used as tools to study aspects of how the cell functions. Lin and a postdoctoral fellow in his lab, Francois St-Pierre, had an idea for generating a protein that would light up in response to a change in voltage, such as what happens when a nerve sends a signal.
Other scientists were working on the same problem, but they were not able to create a protein that responded quickly and strongly to a change in voltage. By looking at the structure of different voltage-sensing proteins, St-Pierre thought he could generate a better signal by putting the fluorescent element in the middle of a voltage-sensing protein.
Despite some concerns that a big fluorescent element in the middle of the protein might disrupt its function, the combination worked. He and Lin named their probe ASAP, an acronym both for a scientific description of the protein and for the protein’s speedy light. St-Pierre was first author on the Nature Neuroscience paper.
A second approach
Like St-Pierre, postdoctoral scholar Yiyang Gong in Schnitzer’s lab recognized the need for a voltage sensing protein, but he took inspiration from a different approach. He had read about work by scientists attempting to detect voltage starting with bacterial proteins called rhodopsins—but without much success.
Gong made significant modifications to that approach and, like St-Pierre, ended up with a protein that will embed in the nerve cell membrane and produce light when the nerve fires. Gong was first author on the Nature Communications paper.
“The two probes actually have similar performance, which is a coincidence because we arrived at them from very different directions,” Lin says.
Tested in living mice
Both groups show that their proteins work in neurons in a lab dish. Gong also inserted his protein in a group of neurons (called Purkinje neurons) in living mice and was able to record the protein’s flashing light as those nerves sent signals.
He was able to see those nerves fire through a tiny glass window into the mouse brain, but the scientists say they could use a camera like the one Schnitzer developed to observe deeper parts of the brain.
The scientists say they view their probes as a starting point. They expect to continue refining the proteins to have properties that are optimized for different cell types or to produce different colors of light.
“I think there will be exciting applications enabled by what we have developed,” Schnitzer says.
Source: Stanford University
The risk of pregnancy for women using a newer method of planned sterilization, called hysteroscopic sterilization, is more than 10 times greater over a 10-year period than with the more commonly performed laparoscopic sterilization, a study shows.
Published online today in the medical journal Contraception, the study found the higher risk of pregnancy with the newer sterilization method, marketed under the brand name Essure.
“This study provides essential information for women and their doctors discussing permanent sterilization,” says lead study author Aileen Gariepy, assistant professor in the department of obstetrics, gynecology, and reproductive sciences at the Yale School of Medicine.
Female surgical sterilization is the most popular method of pregnancy prevention worldwide and the most commonly used method of contraception among women age 35 and older in the United States.
Each year, 345,000 US women undergo sterilization procedures, and a total of 10.3 million US women rely on female sterilization for pregnancy prevention.
Hysteroscopic sterilization is a multi-step process that requires women to have a procedure to place coils inside the opening of the Fallopian tubes, use another method of contraception for three months after the procedure, and then have a special X-ray test in which dye is pushed into the uterus to confirm whether the tubes are blocked.
“When Essure was first approved by the Food and Drug Administration in 2002, data presented to physicians and patients only included those women who successfully completed all of the steps to be sterilized using the procedure,” says study coauthor Mitchell Creinin, professor and chair of the department of obstetrics and gynecology at the UC Davis School of Medicine.
“However, physicians quickly realized that at least 1 in 10 women would not be able to have the coils placed and that many would not return for follow-up testing,” he says.
57 per 1,000 women
The study uses data in the published literature to model what happens to women who start down a path of wanting a laparoscopic sterilization or hysteroscopic sterilization, including those who do not successfully have the procedure. The computer model, called a decision analysis, calculates what could occur in a theoretical group of 100,000 women taking into account all of the potential options that could happen in each step of the process.
The authors found that pregnancy risk after hysteroscopic sterilization is primarily accrued in the first year after initiating the process because hysteroscopic sterilization is not immediately effective. Conversely, laparoscopic sterilization is immediately effective.
The major findings by Gariepy and colleagues include that pregnancy rates in the first year for women planning hysteroscopic sterilization are 57 per 1,000 women, compared with about 3 to 7 per 1,000 women for laparoscopic sterilization.
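The way such risk accrues can be illustrated with a toy calculation, not the authors' published decision analysis: start from the study's first-year rates above, assume a hypothetical constant annual failure rate in later years (the later-year values below were chosen purely for illustration), and compound the probability of avoiding pregnancy year by year.

```python
# Toy cumulative-risk calculation; NOT the published decision analysis.
# First-year rates echo the study's figures; the later-year annual rates
# are hypothetical constants chosen for illustration.

def cumulative_risk(first_year: float, later_year: float, years: int) -> float:
    """Probability of at least one pregnancy over `years` years,
    compounding independent annual failure probabilities."""
    p_no_pregnancy = 1.0 - first_year
    for _ in range(years - 1):
        p_no_pregnancy *= 1.0 - later_year
    return 1.0 - p_no_pregnancy

hystero = cumulative_risk(first_year=0.057, later_year=0.0047, years=10)
laparo = cumulative_risk(first_year=0.005, later_year=0.0025, years=10)
print(f"hysteroscopic: {hystero * 1000:.0f} per 1,000 over 10 years")  # 96
print(f"laparoscopic:  {laparo * 1000:.0f} per 1,000 over 10 years")   # 27
```

With these illustrative inputs the 10-year totals land near the study's reported range, and the model makes the key point visible: most of the hysteroscopic risk is front-loaded into the first year, before the procedure becomes effective.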
The total pregnancy rate over 10 years reached 96 per 1,000 women for hysteroscopic sterilization, compared to only 24 to 30 per 1,000 women with a laparoscopic procedure. The authors accounted for other methods of contraception that would be used by women who did not have a sterilization procedure, including that some women with a failed hysteroscopic procedure would choose a laparoscopic one.
Risks and benefits
Since its introduction, hysteroscopic sterilization has been performed on more than 650,000 women worldwide. This newer procedure can be performed in a doctor’s office and does not involve abdominal incision or general anesthesia.
Many doctors and patients think that these factors make the procedure seem easier.
“However, for women who want to be sure they don’t get pregnant, the current method of hysteroscopic sterilization still is not ready to be used for everyone,” Creinin says.
There have been no studies comparing the effectiveness of hysteroscopic sterilization with laparoscopic sterilization.
“This limits providers’ and patients’ ability to make informed decisions,” Gariepy says.
Gariepy also points out that unintended pregnancy resulting from sterilization failure can have serious consequences for both women’s quality of life and maternal and neonatal health outcomes, and should be considered a significant adverse event.
“Women choose sterilization specifically to prevent any future pregnancies,” Gariepy says. “If one sterilization method has a much higher risk of pregnancy, women and their doctors need to know that as they consider the overall risks and benefits of the procedure.”
Other study authors include Xiao Xu of Yale and Kenneth Smith of the University of Pittsburgh. The Society of Family Planning supported the study.
Source: UC Davis