For most infants who have had cataract surgery, the use of contact lenses for several years—and an eventual lens implant—may be a better solution than the current standard of care, the intraocular lens implant, new research shows.
A cataract is a clouding of the eye’s natural lens and can be removed through a safe, quick surgical procedure. After cataract removal, most adults and children receive a permanent artificial lens, called an intraocular lens (IOL).
This is an option for infants, too, but a new trial shows that the use of contact lenses is safer than, and just as effective as, an IOL for infants under 7 months old.
“When we began this study, the prevailing theory was that IOLs would be the better option for cataract in infants because they correct vision constantly, while contact lenses can be removed or dislodged from the eye.
“But our data suggest that if the family can manage it, contact lenses are the better option until the child gets older,” says Scott Lambert, professor of ophthalmology at Emory University.
“The IATS is a landmark study in the refractive management of infants with cataracts,” says Timothy W. Olsen, chairman of the department of ophthalmology.
“Attempting to estimate the life-long lens implant power and also place an adult lens in a growing infant’s eye is challenging. Families and physicians may now comfortably choose to use contact lenses and wait until children are older when the eye is more mature. The strong evidence, generated by the dedicated investigators in the IATS, helps provide a solid basis for this choice.”

Best time for treatment
Although cataracts are often tied to aging, it’s estimated that 1,200–1,600 infants are diagnosed with congenital cataracts (present from birth) each year in the United States. The condition can affect both eyes, but it often affects just one, which is called unilateral cataract.
Published in JAMA Ophthalmology, the new study compares the use of IOLs versus the use of contact lenses during infancy for treating congenital, unilateral cataract.
In the United States, most children with cataract will eventually receive an IOL, but the timing varies, Lambert says. “I’ve had patients wait until they were in college, whereas others will have it done when they are 5 or 6 years old.”
Still, some prior research suggested that using an IOL to treat cataract during infancy can improve long-term visual outcomes. IOLs can also spare babies—and their parents—the discomfort of daily contact lens changes, and reduce the risk of introducing germs into the eye.
But the use of IOLs during infancy has some drawbacks. Surgeons have difficulty judging the right focusing power of the artificial lens for an infant, because it’s a time of rapid eye growth. Also, while IOL implants are typically safe and complication-free for adults, they are more likely to cause post-operative problems for infants.
“Cataract surgery and the use of IOLs for infants have become more sophisticated and more widely practiced over time. In this study, the goal was to determine if the beneficial effects of IOLs outweigh their known complications,” says Donald Everett, director of collaborative clinical research at the National Eye Institute.
Source: Emory University
The post After cataract surgery, contact lenses best for babies appeared first on Futurity.
The zone of overlap between black-capped and Carolina chickadees is moving northward at a rate that matches warming winter temperatures, say researchers.
In a narrow strip—called a hybrid zone—that runs across the eastern US, Carolina chickadees from the south meet and interbreed with black-capped chickadees from the north.
The new study finds that this hybrid zone, a convenient reference point for scientists tracking environmental changes, has moved northward at a rate of 0.7 mile per year over the last decade. That’s fast enough that the researchers added an extra study site partway through their project.
“A lot of the time climate change doesn’t really seem tangible,” says lead author Scott Taylor, a postdoctoral researcher at the Cornell Lab of Ornithology. “But here are these common little backyard birds we all grew up with, and we’re seeing them moving northward on relatively short time scales.”

Chickadee DNA
The researchers drew on field studies, genetic analyses, and crowdsourced bird sightings. First, detailed observations and banding data from sites across the hybrid zone provided a basic record of how quickly the zone moved.
Next, genetic analyses revealed in unprecedented detail the degree to which hybrids shared the DNA of both parent species. And then crowdsourced data drawn from eBird, a citizen-science project run by the Cornell Lab of Ornithology, allowed the researchers to expand the scale of the study and match bird observations with winter temperatures.
The researchers analyzed blood samples from 167 chickadees—83 collected in 2000-02 and 84 in 2010-12. Using next-generation genetic sequencing, they looked at more than 1,400 fragments of the birds’ genomes to see how much was black-capped chickadee DNA and how much was Carolina.
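As a rough illustration of how an ancestry fraction like this can be computed (a hypothetical sketch, not the study’s actual pipeline), a simple hybrid index averages the number of Carolina-chickadee alleles a bird carries across its genotyped markers:

```python
# Hypothetical sketch of a "hybrid index": score each bird by the fraction
# of ancestry-informative markers carrying Carolina-chickadee alleles
# (0.0 = pure black-capped, 1.0 = pure Carolina). Data are made up.

def hybrid_index(genotypes):
    """genotypes: per-marker Carolina allele counts (0, 1, or 2 copies
    at a diploid marker). Returns the fraction of Carolina ancestry."""
    return sum(genotypes) / (2 * len(genotypes))

pure_carolina = [2] * 10   # two Carolina alleles at every marker
pure_blackcap = [0] * 10   # no Carolina alleles anywhere
f1_hybrid = [1] * 10       # heterozygous at every marker

print(hybrid_index(pure_carolina))  # 1.0
print(hybrid_index(pure_blackcap))  # 0.0
print(hybrid_index(f1_hybrid))      # 0.5
```

Tracking how this index shifts at a fixed study site between sampling periods is one way a moving hybrid zone shows up in the genetic data.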
The site that had been in the middle of the hybrid zone at the start of the study was almost pure Carolina chickadees by the end. The next site to the north, originally picked as a stronghold of black-capped chickadees by Robert Curry of Villanova University, who led the field portion of the study, and his students, had become dominated by hybrids.

Seven-mile shift
Female Carolina chickadees seem to be leading the charge, Curry says. Field observations show that females move on average about 0.6 miles between where they’re born and where they settle down. That’s about twice as far as males and almost exactly as fast as the hybrid zone is moving.
As a final step, the researchers overlaid temperature records on a map of the overlap zone, drawn from eBird sightings of the two chickadee species. They found the zone of overlap occurred only in areas where the average winter low temperature was between 14 and 20 degrees Fahrenheit.
They also used eBird records to estimate where the hybrid zone had been a decade earlier and found the same relationship with temperature existed then. The only difference was that those temperatures had shifted to the north by about seven miles since 2000.
“The rapidity with which these changes are happening is a big deal,” Taylor says. “Small mammals, insects, and definitely plants are probably feeling these same pressures—they’re just not as able to move in response.”
Additional researchers from Cornell and Villanova universities contributed to the study. The findings appear in Current Biology.
Source: Cornell University
When students repeat a grade, it can spell trouble for their classmates, according to a new study of nearly 80,000 middle-schoolers.
In schools with high numbers of grade-repeaters, suspensions were more likely to occur across the school community. Discipline problems were also more common among other students, including substance abuse, fighting, and classroom disruption.
Public debate typically focuses on how retention affects an individual student’s academic performance, says lead author Clara Muschkin. She and her colleagues decided to take a wider view and consider how holding students back may affect the school as a whole.
“The decision to retain students has consequences for the whole school community,” says Muschkin, an associate director of the Duke University Center for Child and Family Policy. “That wider effect is an issue worth considering as we debate this policy.”

Discipline trouble
The study by Muschkin, Elizabeth Glennie, and Audrey Beck looked at 79,314 seventh-graders in 334 North Carolina middle schools.
For information on retention and discipline problems, the authors turned to administrative data from the state’s public school system. The authors found that different schools have greatly varying numbers of older and retained students, with significant consequences.
The authors took pains to account for a range of factors that might offer alternative explanations for their findings, including schools’ socioeconomic composition and parents’ educational status. Even after controlling for such factors, the presence of older and retained students was still strongly linked with more discipline problems in the entire group.
For instance, if 20 percent of children in seventh grade were older than their peers, the chance that other students would commit an infraction or be suspended increased by 200 percent.
“There’s a strong relationship here that we think is likely to be causal,” Muschkin says.

Peer influence
The study focused on two groups in particular: students who repeated a grade, and students who were, on average, a year older than their classmates. When there were more older and retained students present, discipline problems increased for all subgroups in the study, including black and white students and boys and girls. Two groups saw a particularly large jump in discipline problems: white students and girls of all races.
“This finding took us by surprise,” Muschkin says. “These two groups appear to be a bit more affected than others by the influence of older peers.”
In early adolescence, a time of major physical and psychological change, students are particularly vulnerable to peer influence, Muschkin notes. However, more research is needed to understand why some subgroups appear to respond more strongly than others to the influence of their classmates, she says.
Holding students back became a popular educational option as criticism of “social promotion” mounted. The study suggests that since retention has school-wide ramifications, educators should do more to assist older and retained students with their academic struggles; for instance, through tutoring, summer school, and peer mentoring.
“Support for older and retained students is an investment in the achievement and climate of the entire school,” Muschkin says.
The paper appears online in Teachers College Record.
The National Institute on Drug Abuse, through the Duke University Transdisciplinary Prevention Research Center, provided funding for the study. Data was provided by the North Carolina Education Research Data Center at Duke University, with assistance from the North Carolina Department of Public Instruction and financial support from the Spencer Foundation.
Source: Duke University
A new project has tapped the idle processing power of 200,000 computers to simulate the structure of a protein that allows cancer cells to run amok.
Researchers say the work done by Folding@home could lead to new drugs that specifically target the Src kinase protein, thereby easing the side effects of cancer treatment.
The study, led by senior authors Vijay Pande, professor of chemistry, structural biology, and computer science at Stanford University, and Benoît Roux, professor of biochemistry and molecular biophysics at the University of Chicago, was recently published in the journal Nature Communications.
The general role of kinases is to act as an intracellular “molecular switch” that activates other proteins, enabling the cell to carry out its normal duties. Kinases play a particularly important role in regulating cellular growth. Cancer cells corrupt this process and rev up kinase production and activation, causing the cancer to grow and spread unchecked.
To date, there have been relatively few cancer drugs that have successfully targeted and inhibited kinases. The trick is to hit the disrupted kinase at the root of the cancer and to turn it off, without affecting many other similar kinases that are critical, for instance, to heart or kidney cells.
The Src kinase has a specific three-dimensional structure and, like many proteins, its entire structure reconfigures as the protein transforms between its inactive and active states. Scientists can determine the exact structure of these two states using a technique called x-ray crystallography.
These images provide the blueprints for researchers to design drugs that interfere with the protein when it is running out of control. Unfortunately, these two states are fairly similar across the kinase family, making it difficult to target only the kinases being exploited by the cancer.

33 petaflops of processing power
On the other hand, the various configurations that Src kinase passes through during its transition between the active and inactive states could prove more useful, but those short-lived intermediates are nearly impossible to detect experimentally. Unlike x-ray crystallography, however, the simulations can provide a view of these intermediate conformations.
“The intermediate states could be differentiable from other kinases,” says Pande, a co-principal investigator at the Simbios Center for Biomedical Computation at Stanford. “We haven’t proven it yet, but these intermediate targets might present the opportunity to design more selective drugs.”
The program predicts these states by combining elegant algorithms with the brute force of the graphics processing units of 200,000 computers, providing more than 33 petaflops of processing power.
The algorithm knows only the protein’s start and end configurations—the active and inactive states—and discovers the various ways the protein could rearrange itself to get from one end-state to the other.
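The sampling idea can be illustrated with a toy sketch (hypothetical states and transition graph, not the actual Folding@home algorithm): run many short stochastic trajectories between a known start and end state, then count which intermediate states the successful paths pass through most often.

```python
import random

# Toy conformational landscape: 0 = inactive, 4 = active,
# 1-3 = candidate intermediate states. Entirely made up.
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}

random.seed(0)

def run_walk(start=0, end=4, max_steps=50):
    """One random trajectory; returns the visited states if it reaches `end`."""
    state, path = start, [start]
    for _ in range(max_steps):
        state = random.choice(neighbors[state])
        path.append(state)
        if state == end:
            return path
    return None  # walk did not reach the active state in time

visits = {s: 0 for s in neighbors}
successes = 0
for _ in range(5000):
    path = run_walk()
    if path:
        successes += 1
        for s in set(path[1:-1]):  # intermediate states only
            visits[s] += 1

# States that appear on a large fraction of successful paths are the
# likely intermediates of the transition.
for s in sorted(visits):
    print(s, visits[s] / successes if successes else 0.0)
```

In this toy graph every successful path must pass through state 3, so it shows up on 100 percent of successful trajectories; the real simulations do the analogous tally over vastly many more conformations.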
As it runs, certain transitions occur more frequently, increasing the likelihood that they are on the actual path the protein follows in real life. Once the simulation has run enough times, the scientists can identify, with statistical certainty, the most likely order and shape of the transition.

New piece of the puzzle
The next step, Pande says, is to conduct experiments that will confirm the existence of these intermediate states.
“If this is really correct, it’s a piece of the puzzle that nobody had before,” Roux says. “This is one of the first times that computation can give you something that you can almost not get from pure experiment. It would certainly shake things up from a drug development standpoint.”
Pande thinks that this work indicates that the program has become robust enough to move beyond just identifying protein structures, toward simulating how all sorts of molecular interactions occur. A strength of the program, he says, is the length of the calculations it can carry out.
Typically, a supercomputer can crunch out 100 nanoseconds, or maybe a microsecond, of continuous data simulations. By splitting the same work over hundreds of thousands of computers, though, the program can compute a few hundred microseconds, as was the case with Src kinase.
“If you row a boat from Europe to America, and you can only go a mile, you’re never going to discover anything,” Pande says. “But if you can get a few thousand miles offshore, you’ll see something exciting and different from what you’ve seen before.”
Source: Stanford University
Preschoolers can be smarter than college students at figuring out how unusual toys and gadgets work because they’re more flexible and less biased than adults in their ideas about cause and effect, according to new research.
The findings suggest that technology and innovation can benefit from the exploratory learning and probabilistic reasoning skills that come naturally to young children, many of whom are learning to use smartphones even before they can tie their shoelaces.
The findings also build upon the researchers’ efforts to use children’s cognitive smarts to teach machines to learn in more human ways.
“As far as we know, this is the first study examining whether children can learn abstract cause and effect relationships, and comparing them to adults,” says University of California, Berkeley, developmental psychologist Alison Gopnik, senior author of the paper published online in the journal Cognition.
Using a game they call “Blickets,” the researchers looked at how 106 preschoolers (ages 4 and 5) and 170 college undergrads figured out a gizmo that works in an unusual way.
They did this by placing clay shapes (cubes, pyramids, cylinders, etc.), on a red-topped box to see which of the widgets—individually or in combination—could light up the box and play music. The shapes that activated the machine were called “blickets.”
What separated the young players from the adult players was their response to changing evidence in the blicket demonstrations. For example, unusual combinations could make the machine go, and children caught on to that rule, while the adults tended to focus on which individual blocks activated the machine even in the face of changing evidence.
“The kids got it. They figured out that the machine might work in this unusual way and that you should put both blocks on together. But the best and brightest students acted as if the machine would always follow the common and obvious rule, even when we showed them that it might work differently,” writes Gopnik in her forthcoming column in The Wall Street Journal.
Overall, the youngsters were more likely to entertain unlikely possibilities to figure out “blicketness.” This confirmed the researchers’ hypothesis that preschoolers and kindergartners instinctively follow Bayesian logic, a statistical model that draws inferences by calculating the probability of possible outcomes.
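The Bayesian account can be sketched in a few lines (a hypothetical toy model, not the paper’s actual analysis): two candidate rules for the machine, the common “single block” rule and the unusual “combination” rule, with beliefs updated by Bayes’ rule as demonstrations come in. Names and numbers here are illustrative only.

```python
def posterior(prior_single, demos):
    """Return P(single-block rule | demos) given a prior and a list of
    (blocks_on_machine, machine_lit) observations."""
    p_single = prior_single
    p_combo = 1.0 - prior_single
    for blocks, lit in demos:
        # Likelihood of each observation under each rule (near-deterministic,
        # with a small noise term so no hypothesis is ruled out outright).
        lit_if_single = 0.95 if "A" in blocks else 0.05
        lit_if_combo = 0.95 if {"A", "B"} <= set(blocks) else 0.05
        p_single *= lit_if_single if lit else 1 - lit_if_single
        p_combo *= lit_if_combo if lit else 1 - lit_if_combo
    return p_single / (p_single + p_combo)

# Evidence: block A alone fails twice; A and B together light the machine.
demos = [({"A"}, False), ({"A"}, False), ({"A", "B"}, True)]

# An "adult-like" learner starts heavily biased toward the single-block rule;
# a "child-like" learner starts with a flatter prior.
adult = posterior(0.95, demos)
child = posterior(0.50, demos)
print(f"adult P(single rule): {adult:.3f}")
print(f"child P(single rule): {child:.3f}")
```

Both learners end up favoring the combination rule, but the strongly biased learner retains noticeably more belief in the familiar rule after the same evidence, which is the qualitative pattern the study reports.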
“One big question, looking forward, is what makes children more flexible learners—are they just free from the preconceptions that adults have, or are they fundamentally more flexible or exploratory in how they see the world?” says Christopher Lucas, lead author of the paper and a lecturer at the University of Edinburgh. “Regardless, children have a lot to teach us about learning.”
Other co-authors of the study are Thomas Griffiths and Sophie Bridgers of the UC Berkeley department of psychology.
Source: UC Berkeley
Soldiers returning home from war may find themselves engaged in a battle of “warring identities” as they transition to civilian life.
Much of the research on post-combat mental health of veterans focuses on problems like post-traumatic stress disorder (PTSD) and major depression. A new paper focuses instead on how identity conflicts between being a soldier and a civilian can manifest as mental distress.
“You can’t really do research on veterans’ mental health without some kind of dialogue on PTSD, but we’re trying to move away from the standard PTSD framework to contextualize the veteran experience and get a more accurate picture of what vets returning from war look like as opposed to just looking at the medical side of things,” says R. Tyson Smith, visiting assistant professor of sociology at Brown University.
To get this picture, Smith and co-author Gala True, from the Center for Health Equity Research and Promotion in Philadelphia, conducted lengthy interviews with 26 veterans who had recently served in Operation Iraqi Freedom (OIF) and Operation Enduring Freedom (OEF).
Twelve of the interviews were done with veterans who were not routinely receiving health care through the Department of Veterans Affairs and 14 of the interviews were done with veterans who were.
For the study, published in Society and Mental Health, the researchers used a semi-structured interview style consisting of non-directive, open-ended questions such as “Tell me about your experience while deployed,” “Whom, if anyone, do you speak to about your war experiences?” and “What, if any, issues have you been dealing with since your return?”

‘Less-than-one-percent’
Interviews typically lasted several hours and were recorded. The veterans interviewed represented all four branches of the military and all had been deployed at least once in OIF or OEF.
An analysis of the veterans’ reported experiences shows that many had commonalities to their stories, in particular the sense that their combat experience was something that few could understand and that they felt “alien” among family and friends.
Confounding the issue is the fact that the current wars are much less visible than previous wars like Vietnam, where many Americans knew someone who served. Many of today’s soldiers refer to themselves as the “less-than-one-percent.”
“Within our culture, there’s this dichotomy of the valorous hero on one side and the battered, hair-trigger vet on the other. And the true experience is often not at those extremes and leaves veterans feeling, if not judged, misunderstood,” Smith says.

Safe in combat
Many of the soldiers also spoke of combat as a time when they felt “safe,” even more so than when returning to the United States. The military is a highly regimented institution where soldiers develop identities that give them a sense of order, obedience, and collectivism, the authors write. When they return home, “behavior is suddenly voluntary and the lack of regimentation (and a larger sense of purpose) is a basis of distress.”
Many soldiers have difficulty creating an identity when they return to civilian life, Smith says.
“You’re making sense of who you are again and that’s a process that we all do in life on a regular basis, but in this case you return and it’s distressing trying to make sense of yourself after this combat experience.
“Within the total institution of the military, while there are threats and harms, it was also profoundly shaping who you were and your sense of self. That order and identity is no longer part of the everyday after exiting the service.”

Getting veterans to talk
Some veterans reported feeling like they were starting all over again when they returned home. Others felt distanced from their civilian relationships, afraid to disclose too much about their combat experience for fear of being judged or misunderstood.
Withholding such information can stunt a veteran’s ability to reintegrate into civilian life by straining those relationships with people who would be their primary support systems, the authors say.
These “warring identities” can act as a catalyst for, or even present as, mental health and behavioral problems. More attention needs to be given to this psychological process as it plays out when soldiers return from war and to broadening the framework surrounding soldiers’ mental health issues beyond diagnosable illnesses such as PTSD and depression.
While expansion of social support programs is always needed, expansion of dialogue around the veteran experience would be a good first step, Smith says.
“I think that one of the larger scale issues is that we have a lot of difficulty talking about the realities of war. There’s a lot that can be done to enhance the understanding and conversation around what are definitely difficult topics.
“Seeing, handling, coping with dismembered children is hard to think about for all of us, and we really want to turn a blind eye to it. But turning that blind eye is part of the continued struggle.”
Source: Brown University
A new robotic drumming prosthesis has two drumsticks—the musician’s arm and muscle sensors control one, and the other “listens” to the music and improvises.
The robot attaches to an amputee’s arm, embedding its technology into the human body. Motors power the two drumsticks: the first is controlled both physically by the musician’s arm and electronically using electromyography (EMG) muscle sensors, while the other “listens” to the music being played and improvises.
“The second drumstick has a mind of its own,” says Professor Gil Weinberg, founding director of the Georgia Tech Center for Music Technology. “The drummer essentially becomes a cyborg. It’s interesting to see him playing and improvising with part of his arm that he doesn’t totally control.”
The prosthesis was created for Jason Barnes, a drummer who suffered a severe electrical shock two years ago and lost his right arm below the elbow. The Atlanta Institute of Music and Media student built his own prosthetic device shortly after the accident. It wasn’t very flexible. He could bang the drums by moving his elbow up and down, but couldn’t control the speed or bounce of the stick without a wrist or fingers.
That’s when Weinberg stepped in to create a single-stick device with sensors that respond to Barnes’ bicep muscles.
“Now I can flex and send signals to a computer that tightens or loosens the stick and controls the rebound,” says Barnes.

The extra stick
Weinberg, who has already built a robotic percussionist and marimba player that use computer algorithms to improvise with human musicians, took the prosthesis a step further. He added the second stick and gave it a “musical brain.”
“Jason can pull the robotic stick away from the drum when he wants to be fully in control,” says Weinberg. “Or he can allow it to play on its own and be surprised and inspired by his own arm responding to his drumming.”
Regardless of how he uses the extra stick, the new prosthetic has already given Barnes capabilities he hasn’t had since before the amputation.
“Music is very time sensitive. You can hear the difference between two strokes, even if they are a few milliseconds apart,” says Weinberg. “If we are able to use machine learning from Jason’s muscles (and in future steps, from his brain activity) to determine when he intends to drum and have the stick hit at that moment, both arms can be synchronized.”

Astronauts and surgeons
Weinberg says such robotic synchronization technology could potentially be used in the future by fully abled humans to control an embedded, mechanical third arm during time-sensitive operations. For example, Weinberg’s anticipation algorithms could be used to help astronauts or surgeons perform complex, physical tasks in synchronization with robotic devices.
For Barnes, it’s all about the music. Because an embedded chip can control the speed of the drumsticks, the prosthesis can be programmed to play two sticks at a different rhythm. It can also move the sticks faster than humanly possible.
“I’ll bet a lot of metal drummers might be jealous of what I can do now,” he says. “Speed is good. Faster is always better.”
Barnes will play with the device for the first time publicly on March 22 at the Robotic Musicianship Demonstration and Concert at Kennesaw State University’s Bailey Performance Center. The free event is part of the Atlanta Science Festival.
The National Science Foundation funds Weinberg’s research.
Source: Georgia Tech
For the first time, astronomers have used the same imaging technology found in a digital camera to take a picture of a planet far from our solar system with an Earth-based telescope.
While the technology, which replaces an infrared detector, still has a very long way to go, scientists say the accomplishment brings them a small step closer to what will be needed to image Earth-like planets around other stars.
“This is an important next step in the search for exoplanets because imaging in visible light instead of infrared is what we likely have to do if we want to detect planets that might be suitable for harboring life,” says Jared Males, a NASA Sagan Fellow in the department of astronomy and Steward Observatory at the University of Arizona and lead author of a paper to be published in The Astrophysical Journal.
Even though the image was taken at a wavelength that is just shy of being visible to the human eye, the use of a digital camera-type imaging sensor—called a charge-coupled device or CCD—opens up the possibility of imaging planets in visible light, which has not been possible previously with Earth-based telescopes.
“This is exciting to astronomers because it means we now are a small step closer to being able to image planets outside our solar system in visible light,” says co-author Laird Close, professor of astronomy.
All the other Earth-based images taken of exoplanets close to their stars are infrared images, which detect the planets’ heat. This limits the technology to gas giants—massive, hot planets young enough to still shed heat.
In contrast, older, possibly habitable planets that have cooled since their formation don’t show up in infrared images as readily, and to image them, astronomers will have to rely on cameras capable of detecting visible light, Close says.
“Our ultimate goal is to be able to image what we call pale blue dots. After all, the Earth is blue. And that’s where you want to look for other planets: in reflected blue light.”

Mount Everest vs. a molehill
The photographed planet, called Beta Pictoris b, orbits its star at only nine times the Earth-Sun distance, making its orbit smaller than Saturn’s. In the team’s CCD images, Beta Pictoris b appears about 100,000 times fainter than its host star, making it the faintest object imaged so far at such high contrast and at such relative proximity to its star.
The new images of this planet helped confirm that its atmosphere is at a temperature of roughly 2600 degrees Fahrenheit (1700 Kelvin). The team estimates that Beta Pictoris b weighs in at about 12 times the mass of Jupiter.
“Because the Beta Pictoris system is 63.4 light years from Earth, the scenario is equivalent to imaging a dime right next to a lighthouse beam from more than four miles away,” Males says. “Our image has the highest contrast ever achieved on an exoplanet that is so close to its star.”
The contrast in brightness between the bright star and the faint planet is similar to the height of a 4-inch molehill next to Mount Everest, Close says.
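Some quick back-of-the-envelope arithmetic (illustrative only; the molehill and Everest heights are our assumed figures) shows both analogies are in the same ballpark as the quoted 100,000-to-1 contrast:

```python
import math

# Planet is ~100,000 times fainter than its host star.
contrast = 100_000

# In astronomical magnitudes, a brightness ratio r corresponds to
# 2.5 * log10(r) magnitudes of difference.
delta_mag = 2.5 * math.log10(contrast)
print(f"brightness contrast: {delta_mag:.1f} magnitudes")

# Molehill vs. Everest: a 4-inch molehill against roughly 29,029 ft.
everest_in = 29_029 * 12   # Everest's height in inches (assumed figure)
molehill_in = 4
ratio = everest_in / molehill_in
print(f"height ratio: {ratio:,.0f}")
```

The height ratio comes out near 90,000, close to the 100,000-fold brightness contrast, which is why the molehill comparison works.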
In addition to the host star’s overwhelming brightness, the astronomers had to overcome the turbulence in Earth’s atmosphere, which causes stars to twinkle and telescope images to blur.
The success reported here is mostly due to an adaptive optics system that eliminates much of the atmosphere’s effect. The Magellan Adaptive Optics technology is very good at removing this turbulence, or blurring, by means of a deformable mirror changing shape 1,000 times each second in real time.

A planet, not a speckle of noise
Adaptive optics have been used for more than 20 years at observatories in Arizona, most recently at the Large Binocular Telescope, and the latest version has now been deployed in the high desert of Chile at the Magellan 6.5-meter telescope.
The team also imaged the planet with both of MagAO’s cameras, giving the scientists two completely independent simultaneous images of the same object in infrared as well as bluer light to compare and contrast.
“An important part of the signal processing is proving that the tiny dot of light is really the planet and not a speckle of noise,” says Katie Morzinski, who is also a Sagan Fellow and member of the MagAO team.
“I obtained the second image in the infrared spectrum—at which the hot planet shines brightly—to serve as an unequivocal control that we are indeed looking at the planet. Taking the two images simultaneously helps to prove the planet image on the CCD is real and not just noise.”
“In our case, we were able to record the planet’s own glow because it is still young and hot enough so that its signal stood out against the noise introduced by atmospheric blurring,” Males says.
“But when you go yet another 100,000 times fainter to spot much cooler and truly earthlike planets, we reach a situation in which the residual blurring from the atmosphere is too large and we may have to resort to a specialized space telescope instead.”
Development of the MagAO system was supported by the National Science Foundation. The Magellan telescopes are operated by a partnership of the Carnegie Institution, the University of Arizona, Harvard University, Massachusetts Institute of Technology, and the University of Michigan.
The work of NASA Sagan Fellows Jared Males and Katie Morzinski was performed in part under contract with the California Institute of Technology funded by NASA through the Sagan Fellowship Program executed by the NASA Exoplanet Science Institute.
Source: University of Arizona
A recent analysis found that frequent experiences of racism were associated with a higher risk of obesity among African-American women.
The findings, which appear online in the American Journal of Epidemiology, show the relationship between racism and obesity was strongest among women who reported consistently high experiences of racism over a 12-year period.
Researchers based the study on data from the Black Women’s Health Study, a longitudinal study that enrolled 59,000 African-American women in 1995 and has followed them continually.
Rates of obesity in the United States have increased rapidly over the past few decades with the greatest increases reported for African-American women. Approximately half of African-American women are currently classified as obese.
Obesity is a risk factor for numerous health conditions including cardiovascular diseases, type 2 diabetes, orthopedic problems, and death.
Racism is a form of psychosocial stress that African Americans experience disproportionately. Experiences of racism could contribute to obesity because both animal and human data indicate that chronic exposure to stress can result in dysregulation of important neuroendocrine functions, which can in turn influence the accumulation of excess body fat.
The Black Women’s Health Study collected information on lifestyle factors, experiences of racism, height and weight, and other factors using biennial questionnaires.
The participants were asked in 1997 and in 2009 to rate the frequency of “everyday” experiences of racism, such as receiving poorer service in restaurants and stores, and if they had been treated unfairly because of their race on the job, in housing, or by the police (“lifetime” racism).
The analyses were restricted to women under the age of 40 at the beginning of follow-up because most adult weight gain occurs during the reproductive years.
The investigators found that women in the highest category of reported everyday racism in both 1997 and 2009 were 69 percent more likely to become obese compared to those in the lowest category at both intervals. Women who reported more lifetime racism were also at increased risk of obesity.
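To make the “69 percent more likely” figure concrete, here is a minimal sketch using a hypothetical baseline rate (the 20 percent figure below is invented for illustration and does not come from the study):

```python
baseline_rate = 0.20       # hypothetical obesity rate, lowest-racism category
relative_increase = 0.69   # "69 percent more likely," per the study

# Rate implied for the highest-racism category under the assumed baseline.
high_exposure_rate = baseline_rate * (1 + relative_increase)
print(f"{high_exposure_rate:.0%}")
```

Under that assumed 20 percent baseline, the highest-exposure group’s implied rate would be about 34 percent.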
“Experiences of racism may explain in part the high prevalence of obesity among African-American women,” explains Yvette C. Cozier, assistant professor of epidemiology at Boston University who led the analyses.
She suggests that workplace- and community-based programs to combat racism and interventions to reduce racism-induced stress could be an important component of strategies for prevention of obesity, especially in communities at high risk.
The Aetna Foundation and the National Cancer Institute supported the study.
Source: Boston University
The presence—or absence—of complications following surgery is a strong indicator of which patients are likely to be readmitted to the hospital in the 30 days following their procedure, a new study shows.
Predicting which patients are most likely to experience complications using a simple online tool may allow healthcare professionals to flag patients at high risk of readmission in real time and alter care to reduce expensive trips back to the hospital.
A new study examined more than 142,000 patients who had non-cardiac surgery using the American College of Surgeons National Surgical Quality Improvement Program database.
After controlling for severity of disease and surgical complexity, analyses showed that the rate of unplanned 30-day readmissions was approximately 78 percent for patients with any complication diagnosed following discharge from the hospital. Conversely, the rate of unplanned 30-day readmissions was less than 5 percent for patients without any complications.
Hospitals don’t currently have a way to identify surgical patients who are at high risk for unplanned re-hospitalizations. But an online tool—the American College of Surgeons’ Surgical Risk Calculator—allows healthcare professionals to enter patient information like age, body mass index, and smoking status and get an estimate of the patient’s risk of complications following surgery.
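To illustrate the general idea behind such a risk calculator—not the ACS tool’s actual model—here is a toy logistic-regression-style score; the `complication_risk` function and its coefficients are invented for illustration:

```python
import math

def complication_risk(age: float, bmi: float, smoker: bool) -> float:
    """Toy logistic risk score; coefficients are illustrative only,
    not the ACS Surgical Risk Calculator's real (and far richer) model."""
    z = -5.0 + 0.03 * age + 0.05 * bmi + (0.8 if smoker else 0.0)
    return 1.0 / (1.0 + math.exp(-z))  # logistic function maps z to (0, 1)

# A higher-risk profile yields a higher predicted complication probability.
print(f"{complication_risk(65, 32, True):.1%}")
```

The real calculator draws on many more patient and procedure variables, but the structure—patient characteristics in, a probability of complications out—is the same.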
“If a patient’s predicted risk of complications is high, which we’ve shown puts them at greater risk of readmission, a physician might decide to move the patient to the intensive care unit or a step-down unit after surgery, as opposed to a regular hospital unit that manages less sick patients,” says Laurent G. Glance, lead study author and professor of anesthesiology and public health sciences at the University of Rochester School of Medicine and Dentistry.
“This information could also help with staffing. Instead of taking care of eight patients, a nurse might be assigned to monitor just two or three high-risk patients in an effort to prevent complications that could lead to more hospitalizations down the road.”
Patients at high risk of complications could also be more closely monitored after they are discharged from the hospital and sent home in order to uncover and treat surgical complications earlier in their course, before patients require re-hospitalization.

Hospital report cards
Hospital readmissions are believed to be an indicator of inferior care and are the focus of efforts by the Centers for Medicare and Medicaid Services to reduce health care cost and improve quality.
Researchers believe that measuring the end products of health care, such as death, complications and re-hospitalizations, and reporting that information after the fact to health care professionals, patients and third-party payers in the form of report cards, may not be sufficient to achieve the best possible outcomes.
“For physicians, it can be hard to know what to do with report card data,” says Glance, who is also a cardiac anesthesiologist at Strong Memorial Hospital. “We need to provide healthcare teams with information they can use before, not after complications and re-hospitalizations occur.”
Information about a patient’s likelihood of complications could be added to his or her electronic medical record and used before, during, and following surgery to help guide clinical decision making, he says.
Published in JAMA Surgery, the study is the first to examine the association between the risk of complications after surgery and the rate of unplanned re-hospitalizations in a large, nationally representative sample of patients undergoing general surgery.
Incorporating information from the American College of Surgeons National Surgical Quality Improvement Program and Surgical Risk Calculator into the daily workflow of healthcare teams in hospitals across the country could help achieve the Centers for Medicare and Medicaid Services’ goal to reduce hospital readmissions and generate savings in health care costs in the coming years, researchers say.
Researchers from the University of Vermont College of Medicine; University of California, Irvine; and RAND Health contributed to the study, which was funded by the department of anesthesiology at the University of Rochester.
Source: University of Rochester
Scientists have successfully genetically engineered the immune cells of 12 HIV-positive patients to resist infection, and decreased the viral loads of some patients taken off antiretroviral drug therapy (ADT) entirely—including one patient whose levels became undetectable.
The study, which appears today in the New England Journal of Medicine, is the first published report of any gene editing approach in humans.
“This study shows that we can safely and effectively engineer an HIV patient’s own T cells to mimic a naturally occurring resistance to the virus, infuse those engineered cells, have them persist in the body, and potentially keep viral loads at bay without the use of drugs,” says senior author Carl H. June, professor in immunotherapy at the University of Pennsylvania Perelman School of Medicine.
“This reinforces our belief that modified T cells are the key that could eliminate the need for lifelong ADT and potentially lead to functionally curative approaches for HIV/AIDS.”
June and his colleagues, including Bruce L. Levine, professor in cancer gene therapy and director of the Clinical Cell and Vaccine Production Facility at Penn, used zinc finger nuclease (ZFN) technology—“molecular scissors,” of sorts—to modify the patients’ T cells to mimic the CCR5-delta-32 mutation.
That rare mutation is of interest because it provides a natural resistance to the virus, but it occurs in only one percent of the general population. By inducing the mutation, the scientists reduced the expression of CCR5 surface proteins. Without those, HIV cannot enter, rendering the patients’ cells resistant to infection.

T cell infusions
For the study, the team infused the modified cells—known as SB-728-T—into two cohorts of patients, all treated with single infusions—about 10 billion cells—between May 2009 and July 2012. Six were taken off antiretroviral therapy altogether for up to 12 weeks, beginning four weeks after infusion, while six patients remained on treatment.
Infusions were deemed safe and tolerable, the authors report, and modified T cells continued to persist in the patients during follow up visits. One week after the initial infusion, testing revealed a dramatic spike in modified T cells inside the patients’ bodies. While those cells declined over a number of weeks in the blood, the decrease of modified cells was significantly less than that of unmodified T cells during ADT treatment interruption.
Modified cells were also observed in the gut-associated lymphoid tissue, which is a major reservoir of immune cells and a critical reservoir of HIV infection, suggesting that the modified cells are functioning and trafficking normally in the body.
The study also shows promise in the approach’s ability to suppress the virus. The viral loads (HIV-RNA) dropped in four patients whose treatment was interrupted for 12 weeks. One of those patients’ viral loads dropped below the limit of detection; interestingly, the patient was later found to be heterozygous for the CCR5 delta-32 gene mutation.
“Since half the subject’s CCR5 genes were naturally disrupted, the gene editing approach was building on the head start provided by inheriting the mutation from one parent,” says Levine. “This case gives us a better understanding of the mutation and the body’s response to the therapy, opening up another door for study.”

Protect the T cells
Therapies based on the CCR5 mutation have gained steam over the last six years, particularly after a man known as the Berlin Patient was “functionally” cured. Diagnosed with acute myeloid leukemia (AML), he received a stem cell transplant from a donor who had the CCR5 mutation in both alleles (from both parents) and has remained off ADT since 2008.
Researchers are attempting to replicate this phenomenon because allogeneic transplants—which carry a high mortality risk and require lengthy hospitalizations—are not a practical solution for HIV patients who do not have blood cancers. Nor are they effective in ridding the body of HIV unless the donor has the mutated gene in both alleles, as shown recently in two Boston patients who were thought to have been “functionally” cured from transplants, only to see their viral loads spike.
Though disappointing to the research community, the Boston patients’ results highlight key factors when combating the virus.
“Those cases emphasize the need to protect T cells from the virus,” says Pablo Tebas, director of the AIDS Clinical Trials Unit at the Penn Center for AIDS Research, one of two centers where the study was completed.
“The Boston cases show us that for the Berlin patient, it was not the chemotherapy or infusion of a donor’s stem cells that staved off the HIV; it was the protection of the T cells by the lack of CCR5. Those procedures couldn’t completely eliminate the reservoir of the HIV virus, and when the virus came back the T cells were susceptible to infection. The ZFN approach protects T cells from HIV and may be able to almost completely deplete the virus, as those cells are still functional.”
Further clinical trials will evaluate greater numbers of modified T cells in a larger cohort of patients, as well as strategies to increase the persistence of more cells in the body to achieve a therapeutic effect.
A NIAID Program Project Grant, the Penn Center for AIDS Research, Clinical Trials Unit, and Sangamo BioSciences funded the study.
Additional researchers from Penn Medicine and the Albert Einstein College of Medicine, along with scientists from Sangamo BioSciences, which developed the zinc finger nuclease (ZFN) technology behind the T cell therapy approach used in the clinical trial, co-authored the phase I study.
Patients seeking information about Penn Medicine’s gene therapy trials for HIV can call Joe Quinn in the AIDS Clinical Trials Unit at 215-349-8091.
Source: University of Pennsylvania
Scientists have unlocked the blueprint of potassium ion channels, key structures in the heart that help regulate heart contractions.
A single heartbeat is the slow expanding and contracting of the heart muscle. It is controlled, in part, by a series of channels on the surface of heart cells that regulate the movement of different ions into and out of the cells.
The potassium ion channel is critical to ending each heart contraction and is made up of the proteins Q1 and E1. Q1s create the pore that the potassium flows through, and the E1s control how slowly that pore opens and closes, how many channels are on the surface of each cell, and how the channels are regulated by drugs.
For years, scientists have debated how many KCNE1 proteins are required to build one of these channels, theorizing anywhere between one and 14.
Now, researchers at Brandeis University have found that these channels are built with two E1s. Understanding the construction of this channel is key to understanding life-threatening heart conditions, such as arrhythmias, and developing drugs to treat those conditions.
The researchers observed E1 in live, mammalian cells and counted the proteins in individual channels, which had not been done before in this area of research.
Because this mechanism has been so widely debated, Steve Goldstein, professor of biochemistry, and his team used three different means to count E1—including tagging them with different fluorescent colors and using a scorpion toxin to bind to Q1. Each time, the team got the same results.
This report challenges a previous study—the findings of which are currently being used in drug development trials and animal models—which concluded that anywhere from one to four E1s are required per channel.
Goldstein and his team hope their findings may help create more effective models to study heart conditions and their treatment.
The findings appear in the Proceedings of the National Academy of Sciences. The National Institutes of Health funded the work.
Source: Brandeis University
DVDs and other media that claim to be able to teach babies to read don’t work, a new study shows.
“While we cannot say with full assurance that infants at this age cannot learn printed words, our results make clear they did not learn printed words from the baby media product that was tested,” says Susan Neuman, professor in the department of teaching and learning at New York University.
There is one undeniable effect of these products—but it’s on parents, not babies. In exit interviews, parents expressed the belief that their babies were learning to read and that their children had benefited from the program in some areas of vocabulary development.
“It’s clear that parents have great confidence in the impact of these products on their children,” Neuman says. “However, our study indicates this sentiment is misplaced.”
For the study, published in the Journal of Educational Psychology, researchers examined 117 infants, aged 9 to 18 months, who were randomly assigned to treatment and control groups. Children in the treatment condition received a baby media product, which included DVDs, word and picture flashcards, and flip books to be used daily over a seven-month period.
Children in the control condition did not receive these materials. Over the course of seven months, the researchers conducted a home visit, four laboratory visits, and monthly assessments of language development.

Early reading skills?
To test children’s emerging skills in the laboratory, the researchers examined the capacity to recognize letter names, letter sounds, vocabulary, words identified on sight, and comprehension.
A combination of eye-tracking tasks and standardized measures was used to study outcomes at each stage of development. Using state-of-the-art eye-tracking technology, which follows even the slightest eye movements, researchers were able to closely monitor how the infants distributed their attention and how they shifted their gaze from one location to another when shown specific words and phrases.
The results, which included criterion and standardized measures of emergent and early reading skills, showed no differences between the infants exposed to baby media and the control group on 13 of the 14 assessments.
The only assessment that showed a difference was parents’ beliefs that their child was learning new words despite countervailing evidence from a standardized measure indicating no differences between groups.
Researchers from the University of Michigan, the University of Toronto, and Lakehead University contributed to the study.
A new study shows that adolescent girls who experienced maltreatment in the past year and were willing to talk about their painful experiences, thoughts, and emotions were less likely to have PTSD symptoms one year later. Those who tried to avoid painful thoughts and emotions were significantly more likely to exhibit PTSD symptoms down the road.
“Avoidance is something we all do,” says Chad Shenk, assistant professor of human development and family studies at Penn State. “Sometimes it is easier not to think about something. But when we rely on avoidance as a coping strategy…that is when there may be negative consequences.”
Approximately 40 percent of maltreated children develop PTSD at some point in their lives. Shenk and colleagues wanted to identify the factors that keep the remaining 60 percent from experiencing the disorder.
“Children and adolescents react very differently to abuse, and we don’t yet know who is going to develop PTSD and who won’t,” Shenk says. “What factors explain who will develop PTSD and who will not? This study attempted to identify those causal pathways to PTSD.”
One theory holds that PTSD is caused by dysregulation in multiple neurobiological processes, including cortisol deficiencies or heightened suppression of respiratory sinus arrhythmia—each of which affects how individuals can remain calm during a time of stress.
There are also psychological theories, which include experiential avoidance, the tendency to avoid negative feelings like fear, sadness, or shame. The new study, published in the journal Development and Psychopathology, tests these theories by creating one statistical model that includes all of them to see which factors best account for PTSD symptoms.
“It would be inappropriate to say that these are competing theories, but in the literature they are often treated that way,” Shenk says. “Investigators are actually focused on different levels of analysis, one neurological and one psychological, and I think these processes are related.”
At three different points over two years, Shenk and his research team examined girls who suffered from at least one of the three types of child maltreatment—physical abuse, sexual abuse, or neglect—during the previous year. The 51 maltreated adolescent girls were compared to 59 adolescent girls who had not experienced maltreatment.
Figuring out which processes conferred the greatest risk for PTSD could provide a basis for prevention and clinical intervention programs, Shenk says.
“If we can find what the cause or risk pathway is, then we know what to target clinically.”
Researchers from the University of North Carolina at Chapel Hill and the University of Cincinnati College of Medicine contributed to the study, which was supported by the University of Cincinnati, an Institutional Clinical and Translational Science Award, and the National Institute of Child Health and Human Development.
Source: Penn State
Atypical development can be detected as early as 12 months of age among the siblings of children with autism spectrum disorder, new research shows.
Published online in the Journal of the American Academy of Child and Adolescent Psychiatry, the study found that close to half of the younger siblings of children with autism spectrum disorder (ASD) develop in an atypical fashion, with 17 percent developing ASD and another 28 percent showing delays in other areas of development or behavior.
Among the 28 percent of children with older siblings with ASD who showed delays in other areas of development, differences were identified in their social, communication, cognitive, or motor development by 12 months.
The most common deficits were in the social-communication domain, such as extreme shyness with unfamiliar people, lower levels of eye contact, and delayed pointing.
The research suggests that parents and clinicians should be vigilant for such symptoms early on among the siblings of children with autism, in order to take full advantage of opportunities for targeted early intervention to improve those children’s outcomes.
“Having a child in the family with autism spectrum disorder means that subsequent infants born into that family should be regularly screened for developmental and behavioral problems by their pediatricians,” says Sally Ozonoff, study lead author and professor of psychiatry and behavioral sciences at the UC Davis MIND Institute.
“This research should give parents and clinicians hope that clinical symptoms of atypical development can be picked up earlier, so that we can, perhaps, reduce some of the difficulties that these families often face by intervening earlier.”
The study was conducted in 294 infant siblings of children with autism spectrum disorder and 116 infant siblings of children with typical development. All of the study participants were enrolled prior to 18 months of age. Data on the children’s development was collected at 6, 12, 18, 24, and 36 months of age using a variety of standard developmental tests for autism symptoms.
“Good clinical practice suggests that when children are showing atypical development they and their families should be provided with information about the child’s difficulties, clinical reports when practical, and referrals to local service providers,” Ozonoff says.
“The intervention approaches need to be chosen based on each child’s profile of strengths and weaknesses and each family’s goals and priorities.”
Other study authors are from UC Davis, UCLA, and Purdue University. The National Institute of Mental Health supported the study.
Source: UC Davis
A single gene well known for its critical role in sexual differentiation in insects also regulates the complex wing patterns, colors, and structures required for mimicry in swallowtail butterflies.
“Conventional wisdom says that it should be multiple genes working together to control the whole wing pattern of a butterfly,” says Marcus Kronforst, assistant professor of ecology and evolution at the University of Chicago. “But in this case, it’s just this one. This single gene that controls sexual differentiation has been co-opted to do a totally new job.”
Studied as an example of natural selection for centuries, wing pattern mimicry in butterflies enables non-toxic species to mimic the pattern, color, and shape of a toxic species’ wings to deter predation. A single region of the genome regulates this process in some swallowtail butterflies.
Due to the complexity of forms involved with mimicry, researchers have assumed this region contained a “supergene”—multiple tightly linked genes, each controlling a subset of the wing pattern. However, little was known about this hypothesized mimicry supergene.
To identify its function, Kronforst and his team studied Papilio polytes, an Asian swallowtail butterfly species that displays sex-limited mimicry. Females possess one of four different wing patterns, three of which mimic toxic species, while the remaining female form and all males remain non-mimetic.
Through a genetic mapping process that involved mating butterflies of differing wing patterns and comparing the genomes of around 500 offspring, the team identified five possible genes involved in mimicry. They then sequenced the genomes of 30 butterflies, evenly split between mimetic and non-mimetic, and looked for correlations between these specific genes and wing pattern.

Mimicry in butterflies
To their surprise, only one, doublesex, showed an association. Well established as a gene that controls sexual differentiation in insects, doublesex functions through alternative splicing. When copied into messenger RNA, it is cut and rearranged into different isoforms, which then go on to instruct cells whether they should be male or female.
Doublesex is also alternatively spliced into multiple isoforms in Papilio polytes. Two in particular were expressed at extremely high levels in the wings of mimetic butterflies when compared to non-mimetic females. Tracing the doublesex protein from caterpillar to chrysalis to butterfly, the team found expression of doublesex overlaps exactly with wing pattern.
“When you look at the wing tissue in a chrysalis five days after it forms the pupa, it’s just a floppy piece of white tissue,” Kronforst says. “But when you look at where doublesex is being manufactured on the wing, it looks just like the future adult wing pattern.”
How one gene controls so many different functions remains unclear. Kronforst suggests that noncoding, regulatory DNA that controls when and where doublesex is expressed may play a role. The team also found that in mimetic butterflies, the doublesex gene is inverted on the genome.

Just the first step
This inversion eliminates the possibility of recombination—alleles will remain distinct from each other and accumulate differing mutations. This has led to structural differences in the doublesex protein between mimetic and non-mimetic butterflies. Because doublesex is a transcription factor and activates other genes, the researchers believe these differences may also contribute to wing pattern variation.
“We’ve illustrated the genetic basis of female-limited mimicry in these butterflies,” says Wei Zhang, postdoctoral fellow and a lead author. “But this is just the first step. How doublesex became involved in this process is still uncertain, and requires further study.”
Study lead author Krushnamegh Kunte of the National Center for Biological Sciences in Bengaluru, India, and a former postdoctoral fellow in the Kronforst lab, anticipates future research will determine if this type of phenomenon will be found in other species.
“Across animal species, we find examples where polymorphisms occur in one sex or the other,” he says. “We’re studying it in the context of mimicry, but it’s possible that this sex differentiation pathway that we found in butterflies could be a pathway that’s more broadly important for sex-limited polymorphism.”
The National Science Foundation supported the research.
Source: University of Chicago
Carbohydrate molecules may serve as signals for cancer, pointing to new ways in which sugars can be used to look at the inner workings of cells.
“Carbohydrates can tell us a lot about what’s going on inside of a cell, so they are potentially good markers for disease,” says Lara Mahal, an associate professor in New York University’s Department of Chemistry and the study’s corresponding author. “Our study reveals how cancer cells produce certain ‘carbohydrate signatures’ that we can now identify.”
Carbohydrates, or glycans, are complex cell-surface molecules that control multiple aspects of cell biology, including cancer metastasis. But less understood is the link between categories of cells and corresponding carbohydrate structures. That is, what do certain carbohydrates on a cell’s surface tell us about its characteristics and inner workings or, more succinctly, how do you read a code backwards?
In the study published in the Proceedings of the National Academy of Sciences, the researchers examined the role of microRNAs (miRNAs), non-coding RNAs that act as regulators of the genome. Specific miRNAs—such as miR-200—play a role in controlling tumor growth.
Using microarray technology developed by Mahal, the team examined cancer cells in an effort to see how they generated a carbohydrate signature. Specifically, they mapped how miRNA controls carbohydrate signatures.
In their analysis, the researchers, including scientists from the University of Texas at Austin, found that miRNA molecules serve as major regulators of the cell’s surface-level carbohydrates. The discovery showed, for the first time, that miRNAs play a significant regulatory role in this part of the cell, also known as the glycome. Moreover, the team could see which regulatory process was linked to which specific carbohydrates.
“Carbohydrates aren’t just telling you the type of cell they came from, but also by which process they were created,” explains Mahal. “Our results showed that there are regulatory networks of miRNAs and that they are associated with specific carbohydrate codes.”
The National Institutes of Health supported this study.
Source: New York University
Scientists believe they have pinpointed the exact compounds in strawberries that give the fruit its unique flavor, a finding that could help breeders create more flavorful varieties even faster.
Eventually, those naturally occurring compounds could be used to make processed foods taste better with far less sugar and no artificial sweeteners. And if fruits and vegetables taste better, people will be more likely to eat them, researchers say.
After looking at 35 strawberry varieties over two growing seasons, conducting extensive biochemical testing, and hosting consumer taste panels, the researchers identified 30 compounds directly tied to strawberry flavors that consumers prefer.
They also identified six volatile compounds that add to humans’ perception of sweetness in the fruit—independent of any type of sugar. Those six volatiles add to the growing portfolio of sugar-independent, flavor-enhancing compounds found in fruits, vegetables, and herbs that researchers are zeroing in on.
Similar findings are expected in the next few years for other crops, including blueberries, peaches, and various herbs, says Thomas Colquhoun, assistant professor in environmental horticulture at the University of Florida.
‘Flavor puzzle’
“You can envision that every time we’re looking at a crop, we’re getting a new, exciting chance to be able to add one little piece to that flavor puzzle.”
The six sugar-independent volatiles are some of the group’s top “targets of interest,” says Michael Schwieterman, a postdoctoral researcher and the paper’s lead author.
While the ability to corral those volatiles and use them to make more flavorful foods with less added sugar may lie in the future, traditional plant breeders are using the findings to create consumer-preferred flavors now.
Published in PLOS ONE, the latest study also looked at seasonality, or changes in the berries’ chemical makeup over the course of the growing season, so plant breeders can now use the information to select plants that keep premium flavor throughout the season, Schwieterman says.
“So when we find these specific volatiles, it will help us produce cultivars that we know have a good chemical profile and should be perceived as much sweeter, with better flavor.”
The US Department of Agriculture supported the research.
Source: University of Florida
Crops genetically modified with the bacterium Bt (Bacillus thuringiensis) produce proteins that kill pest insects. But steady exposure may allow pests to develop resistance to the proteins, making Bt plants ineffective.
New research shows that combining natural enemies, such as ladybird beetles, with Bt crops delays the pests’ ability to evolve resistance to these insecticidal proteins.
“This is the first demonstrated example of a predator being able to delay the evolution of resistance in an insect pest to a Bt crop,” says Anthony Shelton, professor of entomology at Cornell University.
Bt is a soil bacterium that produces proteins that are toxic to some species of caterpillars and beetles when ingested, but that have been shown to be safe for humans and for many natural enemies, including predaceous ladybirds.
Bt genes have been engineered into a variety of crops to control insect pests.
Farmers began planting Bt crops in 1996, and 70 million hectares were planted in the United States in 2012. Yet in that time there have been only three clear-cut cases in agriculture of resistance in caterpillars and one in a beetle. “Resistance to Bt crops is surprisingly uncommon,” Shelton says.
To delay or prevent insect pests from evolving resistance to Bt crops, the US Environmental Protection Agency promotes the use of multiple Bt genes in plants and the practice of growing refuges of non-Bt plants that serve as a reservoir for insects with Bt-susceptible genes.
“Our paper argues there is another factor involved: the conservation of natural enemies of the pest species,” Shelton says. These predators can reduce the number of potentially resistant individuals in a pest population and delay the evolution of resistance to Bt.
Pest control
For the study, published in PLOS ONE, researchers set up large cages in a greenhouse. Each cage contained Bt broccoli and refuges of non-Bt broccoli. They studied populations of diamondback moth (Plutella xylostella) larvae, a pest of broccoli, and their natural enemies, ladybird beetles (Coleomegilla maculata), for six generations.
Cages contained different combinations of treatments with and without predators, and with and without sprayed insecticides on the non-Bt refuge plants. Farmers commonly spray insecticides on refuge plants to prevent loss by pests, but such sprays can kill predators and prey indiscriminately.
The results showed that diamondback moth populations were reduced in the treatment containing ladybird beetles and unsprayed, non-Bt refuge plants. Resistance to Bt plants also evolved significantly more slowly in this treatment.
In contrast, Bt plants with no refuge were completely defoliated in treatments without ladybirds after only four to five generations, showing rapid development of resistance in the pests. In the treatment with sprayed non-Bt refuge plants and predators, diamondback moth populations were reduced, but the larvae more quickly evolved resistance to the Bt plants.
“These results demonstrate the effectiveness of Bt plants in controlling the pest population, the lack of effect of Bt on the predators and the role predators play in delaying resistance to Bt plants in the pest population,” Shelton says.
Researchers at the University of Melbourne contributed to the study, which was funded by the US Department of Agriculture and the Special Research Projects for Developing Transgenic Plants in China.
Source: Cornell University
Though present in more than 6,000 living species of fish, the adipose fin, a small appendage that lies between the dorsal fin and tail, has no clear function and is thought to be vestigial.
However, a new study analyzing their origins finds that these fins arose repeatedly and independently in multiple species. In addition, adipose fins appear to have repeatedly and independently evolved a skeleton, offering a glimpse into how new tissue types and structural complexity evolve in vertebrate appendages.
Adipose fins, therefore, represent a prime example of convergent evolution and a new model for exploring the evolution of vertebrate limbs and appendages, report scientists in the Proceedings of the Royal Society B.
“Vertebrates in general have conserved body plans, and new appendages, whether fins or limbs, evolve rarely,” says senior author Michael Coates, chair of the Committee on Evolutionary Biology at the University of Chicago. “Here, we have a natural experiment re-run repeatedly, providing a superb new system in which to explore novelty and change.”
Usually small and structurally simple, adipose fins tend to get attention only when they are clipped from farm-raised trout and salmon as a tag. Despite their presence in thousands of fish species, they have been dismissed as a remnant of a once-functional fin. This assumption puzzled Coates and his co-authors, as they saw no evidence of deterioration in adipose fin structure or function in the fossil record.
Convergent evolution
To study the evolutionary origins of this fin, Coates and lead author Thomas Stewart, graduate student in organismal biology and anatomy, turned to a technique known as ancestral-state reconstruction. With co-author W. Leo Smith, from the Biodiversity Institute at the University of Kansas, they created an evolutionary tree describing the relationships between fish with and without adipose fins, using genetic information from more than 200 ray-finned fish and fossil data from known time points. They then used statistical models to predict when and in what species the adipose fin might have first evolved.
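The study itself fit statistical models to a time-calibrated tree of more than 200 species; a much simpler way to see the core idea of ancestral-state reconstruction is Fitch parsimony, which infers the minimum number of character changes needed to explain the states observed at the tips. The toy sketch below is purely illustrative, and the four-lineage tree and presence/absence states in it are hypothetical, not data from the paper.

```python
# Toy ancestral-state reconstruction via Fitch parsimony.
# The published analysis used likelihood models on a large,
# time-calibrated phylogeny; this minimal sketch only shows
# how tip states propagate toward ancestral nodes.

def fitch(node, tip_states):
    """Return (possible_ancestral_states, change_count) for a
    binary tree given as nested 2-tuples with string leaf names."""
    if isinstance(node, str):                # leaf: state is observed
        return {tip_states[node]}, 0
    left, right = node
    ls, lc = fitch(left, tip_states)
    rs, rc = fitch(right, tip_states)
    common = ls & rs
    if common:                               # states agree: no change needed
        return common, lc + rc
    return ls | rs, lc + rc + 1              # states conflict: infer one change

# Hypothetical lineages; 1 = adipose fin present, 0 = absent
tree = (("catfish", "characin"), ("salmon", "perch"))
states = {"catfish": 1, "characin": 1, "salmon": 1, "perch": 0}

root_states, changes = fitch(tree, states)
print(root_states, changes)                  # prints: {1} 1
```

Under these made-up tip states, parsimony infers an ancestor with the fin present and a single loss on the lineage leading to "perch"; the real study ran analogous (likelihood-based) inferences to ask where and how often adipose fins were gained.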
They found that adipose fins originated multiple times, independently, in catfish and other groups of ray-finned fishes—a striking example of convergent evolution over a vast range of species.
“It’s pretty incredible that a structure which is incredibly common could be so misunderstood,” Stewart says. “Our finding, that adipose fins have evolved repeatedly, shows that this structure, long assumed to be more-or-less useless, might be very important to some fishes. It’s exciting because it opens up new questions.”
Spines, plates, fins, and discs
More than 600 species of fish were studied in the course of this research, including many from the collections of The Field Museum in Chicago. This analysis revealed that a number of complex skeletal structures, including spines, plates, fin rays and cartilage discs, evolved independently in the adipose fins of different species. And while studies of the fossil record have suggested that new fins originate in a predictable and repeated manner, adipose fins demonstrate multiple routes to building new appendages.
“These results challenge what was generally thought for how new fins and limbs evolve, and shed new light on ways to explore the full range of vertebrate limb and fin diversity,” Stewart notes.
The National Science Foundation and the University of Chicago Division of Biological Sciences supported the study.
Source: University of Chicago