The microbial communities living on the surface of grapes may shape a wine’s terroir—the unique blend of vineyard soil and climate of every winegrowing region.
Results from DNA sequencing reveal patterns in the fungal and bacterial communities on grapes, and these patterns in turn are influenced by vineyard environmental conditions. The findings appear in the Proceedings of the National Academy of Sciences.
“The study results represent a real paradigm shift in our understanding of grape and wine production, as well as other food and agricultural systems in which microbial communities impact the qualities of the fresh or processed products,” says Professor David Mills, a microbiologist at the University of California, Davis.
He says further studies are needed to determine whether these variations in the microbial communities eventually produce detectable differences in the flavor, aroma, and other chemically linked sensory properties of wines.
By gaining a better understanding of microbial terroir, growers and vintners may be able to better plan how to manage their vineyards and customize wine production to achieve optimal wine quality, the study authors say.

Chardonnay vs. Cabernet Sauvignon
To examine the microbial terroir, the researchers collected 273 samples of grape “must”—the pulpy mixture of juice, skins, and seeds from freshly crushed, de-stemmed wine grapes.
The must samples were collected right after crushing and mixing from wineries throughout California’s wine-grape growing regions during two separate vintages. Each sample, containing grapes from a specific vineyard block, was immediately frozen for analysis.
The researchers used a DNA sequencing technique called short-amplicon sequencing to characterize the fungal and bacterial communities growing on the surface of the grapes and subsequently appearing in the grape must samples.
They found that the structure of the microbial communities varied widely across different grape growing regions. The data also indicated that there were significant regional patterns of both fungal and bacterial communities represented in Chardonnay must samples.
However, the Cabernet Sauvignon samples exhibited strong regional patterns for fungal communities but only weak patterns for bacterial communities.
Further tests showed that the bacterial and fungal patterns followed a geographical axis running north-south and roughly parallel to the California coastline, suggesting that microbial patterns are influenced by environmental factors.
Taken together, these and other results from the study reveal patterns of regional distributions of the microbial communities across large geographical scales, the study co-authors report.
They note that it appears growing regions can be distinguished based on the abundance of several key groups of fungi and bacteria, and that these regional features have obvious consequences for both grapevine management and wine quality.
The American Wine Society Educational Foundation Endowment Fund, the American Society of Brewing Chemists Foundation, and the Wine Spectator supported the project.
Source: UC Davis
They may be slow swimmers, but seahorses are amazingly fast when it comes to snatching prey.
“A seahorse is one of the slowest swimming fish that we know of, but it’s able to capture prey that swim at incredible speeds for their size,” says Brad Gemmell, research associate at the University of Texas Marine Science Institute.
The prey, in this case, are copepods. Copepods are extremely small crustaceans that are a critical component of the marine food web. They are a favored meal of seahorses, pipefish and sea dragons, all of which are uniquely shaped fish in the syngnathid family.
Copepods escape predators when they detect waves produced in advance of an attack, and they can jolt away at speeds of more than 500 body lengths per second. That equates to a 6-foot person swimming under water at 2,000 mph.
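That scaling claim is easy to sanity-check. A quick back-of-the-envelope sketch (the 500-body-lengths-per-second and 6-foot figures come from the text; the unit conversions are standard):

```python
# Scale a copepod's escape speed (500 body lengths per second)
# up to a 6-foot human swimmer, as the article does.
BODY_LENGTHS_PER_SEC = 500
HUMAN_HEIGHT_FT = 6

speed_ft_per_sec = BODY_LENGTHS_PER_SEC * HUMAN_HEIGHT_FT  # 3,000 ft/s
speed_mph = speed_ft_per_sec * 3600 / 5280  # ft/s to miles per hour

print(f"{speed_mph:,.0f} mph")  # about 2,045 mph, i.e. roughly 2,000 mph
```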
“Seahorses have the capability to overcome the sensory abilities of one of the most talented escape artists in the aquatic world—copepods,” says Gemmell. “People often don’t think of seahorses as amazing predators, but they really are.”
In calm conditions, seahorses are better at capturing prey than any other fish tested, catching their intended prey 90 percent of the time. “That’s extremely high,” says Gemmell, “and we wanted to know why.”
For their study, Gemmell and his colleague Ed Buskey, professor of marine science, turned to the dwarf seahorse, Hippocampus zosterae, which is native to the Bahamas and the United States.
To observe the seahorses and the copepods in action, they used high-speed digital 3D holography techniques developed by mechanical engineer Jian Sheng at Texas Tech University. The technique uses a microscope outfitted with a laser and a high-speed digital camera to catch the rapid movements of microscopic animals moving in and out of focus in a 3D volume of liquid.
The holography technique revealed that the seahorse’s head is shaped to minimize the disturbance of water in front of its mouth before it strikes. Just above and in front of the seahorse’s nostrils is a kind of “no wake zone,” and the seahorse angles its head precisely in relation to its prey so that no fluid disturbance reaches it.
Other small fish with blunter heads, such as the three-spined stickleback, have no such advantage.
Gemmell says that the unique head shape of seahorses and their kin likely evolved partly in response to pressures to catch their prey. Individuals that could get very close to prey without generating an escape response would be more successful in the long term.
“It’s like an arms race between predator and prey, and the seahorse has developed a good method for getting close enough so that their striking distance is very short,” he explains.

Suction up prey
Seahorses feed by a method known as pivot feeding. They rapidly rotate their heads upward and draw the prey in with suction. The suction only works at short distances; the effective strike range for seahorses is about 1 millimeter. And a strike happens in less than 1 millisecond.
Copepods can respond to predator movements in 2 to 3 milliseconds—faster than almost anything known, but not fast enough to escape the strike of the seahorse.
Once a copepod is within range of a seahorse, which is effectively cloaked by its head shape, the copepod has no chance.
Gemmell says that being able to unravel these interactions between small fish and tiny copepods is important because of the role that copepods play in larger ecosystem food webs. They are a major source of energy and an anchor of the marine food web, and what affects copepods eventually affects humans, who sit near the top of the web, eating the larger fish that also depend on copepods.
The team published their research in Nature Communications.
Source: University of Texas at Austin
Research has suggested that a particular gene in the brain’s reward system contributes to overeating and obesity in adults. The new study links this same variant to childhood obesity and tasty food choices, particularly for girls.
Contrary to “blaming” obese individuals for making poor food choices, Professor Michael Meaney of McGill University and his team suggest that obesity lies at the interface of three factors: genetic predispositions, environmental stress, and emotional well-being.
These findings, published in the journal Appetite, shed light on why some children may be predisposed to obesity and could mark a critical step toward prevention and treatment.
“In broad terms, we are finding that obesity is a product of genetics, early development, and circumstance,” says Meaney, who is also associate director of the Douglas Mental Health University Institute Research Centre.
The work is part of the MAVAN (Maternal Adversity Vulnerability & Neurodevelopment) project, headed by Meaney and Hélène Gaudreau, project coordinator. Their team studied pregnant women, some of whom suffered from depression or lived in poverty, and followed their children from birth until the age of ten.
For the study, researchers tested 150 four-year-old MAVAN children by administering a snack test meal. The children were faced with healthy and non-healthy food choices. Mothers also completed a questionnaire to address their child’s normal food consumption and preferences.
“We found that a variation in a gene that regulates the activity of dopamine, a major neurotransmitter that regulates the individual’s response to tasty food, predicted the amount of ‘comfort’ foods—highly palatable foods such as ice cream, candy, or calorie-laden snacks—selected and eaten by the children,” says Patricia Silveira of McGill University.
“This effect was especially important for girls who we found carried the genetic allele that decreases dopamine function.”
“Most importantly, the amount of comfort food eaten during the snack test in the four-year-olds predicted the body weight of the girls at six years of age,” says Meaney.
“Our research indicates that genetics and emotional well-being combine to drive consumption of foods that promote obesity. The next step is to identify vulnerable children, as there may be ways for prevention and counseling in early obesity stages.”
Robert Levitan of the University of Toronto is also a co-author of the study.
Source: McGill University
Because of the high it produces, the prescription painkiller oxycodone is the most popular drug of choice among opioid drug abusers in rehab.
Hydrocodone, also prescribed to treat pain, is next in line. In all, some 75 percent of those surveyed rated one of these drugs as their favorite.
A nationwide survey questioned more than 3,500 people in 160 drug-treatment programs across the United States, asking which drugs they abuse and why. Oxycodone was favored by 45 percent, and hydrocodone was preferred by about 30 percent.
Although the drugs are meant to be taken orally, almost 64 percent of oxycodone abusers and just over one-quarter of hydrocodone abusers crush the tablets and inhale the drug, while one in five oxycodone abusers reported that they sometimes dissolve the drug in water and inject it. Less than 5 percent reported taking hydrocodone intravenously.
Personality, age, and gender all played a role in drug preferences, according to the study published in the journal PAIN. Oxycodone is attractive to those who enjoy taking risks and prefer to inject or snort drugs to get high. Young, male drug users tend to fit that profile.
In contrast, hydrocodone is the more popular choice among women, older people, people who don’t want to inject drugs, and those who prefer to deal with a doctor or friend rather than a drug dealer.

When patients fake pain
“Opioids are prescribed to treat pain, but their misuse has risen dramatically in recent years,” says principal investigator Theodore J. Cicero, professor of neuropharmacology at Washington University in St. Louis. “Our goal is to understand the personal characteristics of people who are susceptible to drug abuse, so we can detect problems ahead of time.”
For example, Cicero’s team wants to find better ways to identify people who visit doctors and fake pain, as well as those who are in pain but at high risk of becoming dependent on pain-killing drugs.
Oxycodone is commonly sold under brand names such as OxyContin and Percocet. Hydrocodone is the chemical name for the opioid in the drug sold as Vicodin, among other brand names.
Among those surveyed, 54 percent said the quality of the high was much better with oxycodone, compared with 20 percent who preferred the high they got from hydrocodone.

Pure form
“Among the reasons addicts prefer oxycodone is that they can get it in pure form,” Cicero says. “Until recently, all drugs with hydrocodone as their active ingredient also contained another product such as acetaminophen, the pain reliever in Tylenol. That turns out to be very important because addicts don’t like acetaminophen.”
Acetaminophen causes considerable irritation when it’s injected, and when taken orally in large amounts, it can cause severe liver damage, he says.
“Interestingly, addicts, while they’re harming their health in one respect by taking these drugs, report being very concerned about the potentially negative side effects of acetaminophen,” Cicero says.
Those side effects, combined with a preference for the high provided by oxycodone, have led drug abusers to seek out that drug, either on the street or by visiting physicians and attempting to convince doctors that they have pain severe enough to warrant a prescription painkiller.

Graduating to heroin
Cicero says he’s concerned with the US Food and Drug Administration’s (FDA) recent approval of a new, pure form of hydrocodone without acetaminophen, a formulation he expects will be attractive to abusers.
Investigators conducted a pair of anonymous surveys and longer, follow-up interviews with 200 patients willing to give up their anonymity to answer personal questions about drug use.
Even among people in treatment for drug dependence, there seems to be little appetite for moving to stronger prescription narcotics such as fentanyl or various derivatives of morphine.
“Addicts will crush OxyContin pills and inject or snort them to get high, but they don’t seem to want to take more potent prescription drugs,” Cicero says.
“Those drugs—such as hydromorphone, fentanyl, and Dilaudid—have a pretty small safety margin. When you look at the dose to produce euphoria versus the amount required for overdose, it’s a pretty small difference. Even serious drug abusers said they try to avoid those drugs.”
But, previous research shows some abusers are moving from abusing prescription drugs to “street” drugs. Since the introduction in 2010 of a formulation of OxyContin that is harder to snort or inject, large numbers of oxycodone users have reported switching to heroin.
“It’s a huge issue, and it’s a difficult one to deal with,” Cicero says. “Heroin actually has become a cheaper alternative to prescription drugs, and that’s a frightening development because you’ve now got people who never would have considered using heroin, but they’re making the transition.”
Not all drug abusers in treatment got there through thrill-seeking or because they were looking for a great high, the survey shows. Many sought out pain-killing drugs because they were in pain.
“We found that about 50 percent of people, even those who have ‘graduated’ to drugs like heroin, indicated that they started taking these drugs because they had difficulty controlling pain,” Cicero says. “That’s very different from some users who told us they just wanted to get high.”
Spontaneous bursts of light from a solid block illuminate the unusual way interacting quantum particles behave when they are driven far from equilibrium.
The discovery of a way to trigger these flashes may lead to new telecommunications equipment and other devices that transmit signals at picosecond speeds.
The Rice University lab of Junichiro Kono found the flashes, which last trillionths of a second, change color as they pulse from within a solid-state block. The researchers say the phenomenon can be understood as a combination of two previously known many-body concepts: superfluorescence, as seen in atomic and molecular systems, and Fermi-edge singularities, a process known to occur in metals.
The team previously reported the first observation of superfluorescence in a solid-state system by strongly exciting semiconductor quantum wells in high magnetic fields.
The new process—Fermi-edge superfluorescence—does not require them to use powerful magnets. That opens up the possibility of making compact semiconductor devices to produce picosecond pulses of light.
The researchers report their findings online in Scientific Reports.
The semiconducting quantum wells at the center of the experiment contain particles—in this case, a dense collection of electrons and holes—and confine them to wiggle only within the two dimensions allowed by the tiny, stacked wells, where they are subject to strong Coulomb interactions.
Previous experiments showed the ability to create superfluorescent bursts from a stack of quantum wells excited by a laser in extreme cold and under the influence of a strong magnetic field, both of which further quenched the electrons’ motions and made an atom-like system. The basic features were essentially the same as those known for superfluorescence in atomic systems.

More mysteries
That was a first, but mysteries remained, especially in results obtained at low or zero magnetic fields. Kono says the team didn’t understand at the time why the wavelength of the burst changed over its 100-picosecond span. Now they do.
In the new results, the researchers not only described the mechanism by which the light’s wavelength evolves during the event (as a Fermi-edge singularity), but also managed to record it without having to travel to the National High Magnetic Field Laboratory at Florida State.
Kono says superfluorescence is a well-known many-body, or cooperative, phenomenon in atomic physics. Many-body theory gives physicists a way to understand how large numbers of interacting particles like molecules, atoms, and electrons behave collectively.
Superfluorescence is one example of how atoms under tight controls collaborate when triggered by an external source of energy. However, electrons and holes in semiconductors are charged particles, so they interact more strongly than atoms or molecules do.
The quantum well, as before, consisted of stacked blocks of an indium gallium arsenide compound separated by barriers of gallium arsenide. “It’s a unique, solid-state environment where many-body effects completely dominate the dynamics of the system,” Kono says.
“When a strong magnetic field is applied, electrons and holes are fully quantized—that is, constrained in their range of motion—just like electrons in atoms,” he says.
“So the essential physics in the presence of a high magnetic field is quite similar to that in atomic gases. But as we decrease and eventually eliminate the magnetic field, we’re entering a regime atomic physics cannot access, where continua of electronic states, or bands, exist.”
The Kono team’s goal was to keep the particles as dense as possible at liquid helium temperatures (about -450 degrees Fahrenheit) so that their quantum states were obvious, or “quantum degenerate,” which happens when the so-called Fermi energy is much larger than the thermal energy.
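As a quick check on the temperature quoted above: liquid helium boils at about 4.2 kelvin, which works out to roughly the -450 degrees Fahrenheit figure in the text (the 4.2 K boiling point is a standard value; the conversion is the usual kelvin-to-Fahrenheit formula):

```python
# Convert liquid helium's boiling point (about 4.2 K) to Fahrenheit
# to check the "about -450 degrees Fahrenheit" figure in the text.
def kelvin_to_fahrenheit(kelvin: float) -> float:
    return kelvin * 9 / 5 - 459.67

print(round(kelvin_to_fahrenheit(4.2), 1))  # -452.1, i.e. about -450 F
```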
When pumped by a strong laser, these quantum degenerate particles gathered energy and released it as light at the Fermi edge: the energy level of the most energetic particles in the system. As the electrons and holes combined to release photons, the edge shifted to lower-energy particles and triggered more reactions until the sequence played out.
The researchers found the emitted light shifted toward longer, redder wavelengths as the burst progressed.

No magnets required
“What’s cool about this is that we have a material, we excite it with a 150-femtosecond pulse, wait for 100 picoseconds, and all of a sudden a picosecond pulse comes out. It’s a long delay,” Kono says.
“This may lead to a new method for producing picosecond pulses from a solid. We saw something essentially the same previously, but it required high magnetic fields, so there was no practical application. But now the present work demonstrates that we don’t need a magnet.”
The team included co-lead authors Timothy Noe, a Rice postdoctoral researcher, and Ji-Hee Kim, a former Rice postdoctoral researcher and now a research professor at Sungkyunkwan University in the Republic of Korea. Co-authors contributed from Florida State University and Texas A&M University.
The National Science Foundation and the state of Florida supported the research.
Source: Rice University
To learn more about how the brain can process multiple odors all at once, scientists trained locusts to respond to a specific smell.
Locusts have a relatively simple sensory system, which is ideal for studying brain activity.
Barani Raman, of the School of Engineering & Applied Science at Washington University in St. Louis, found that odors prompted neural activity in the brain that allowed the locust to correctly identify the stimulus, even with other odors present.

How to train a locust
The team used a computer-controlled pneumatic pump to administer an odor puff to the locust, which has olfactory receptor neurons in its antennae, similar to sensory neurons in our nose.
A few seconds after the odor puff, the locust receives a piece of grass as a reward, a form of Pavlovian conditioning. As with Pavlov’s dog, which salivated when it heard a bell ring, trained locusts anticipate the reward when the odor used for training is delivered.
Instead of salivating, they opened their palps, finger-like projections close to the mouthparts, when they predicted the reward. Their response time was less than half a second.
The locusts could recognize the trained odors even when another odor meant to distract them was introduced prior to the target cue.
“We were expecting this result, but the speed with which it was done was surprising,” says Raman, assistant professor of biomedical engineering. “It took only a few hundred milliseconds for the locust’s brain to begin tracking a novel odor introduced in its surrounding. The locusts are processing chemical cues in an extremely rapid fashion.”
“There were some interesting cues in the odors we chose,” Raman says. “Geraniol, which smells like rose to us, is an attractant to the locusts, but citral, which smells like lemon to us, is a repellant to them. This helped us identify principles that are common to odor processing.”
Raman has spent a decade learning how the human brain and olfactory system operate to process scent and odor signals. His research could lead to a device for noninvasive chemical sensing that takes inspiration from the biological olfactory system. Such a device could be used in homeland security applications to detect volatile chemicals and in medical diagnostics to test blood-alcohol level.
This study is the first in a series focused on the principles of olfactory computation, Raman says.
“There is a precursory cue that could tell the brain there is a predator in the environment, and it has to predict what will happen next,” Raman says. “We want to determine what kinds of computations have to be done to make those predictions.”
The results were published in Nature Neuroscience.
A study with mice suggests that a high-fat diet during puberty could speed up the development of breast cancer.
The findings, published online in the journal Breast Cancer Research, indicate that before any tumors appear, there are changes in the breast that include increased cell growth and alterations in immune cells. These changes persist into adulthood and can lead to the rapid development of precancerous lesions and ultimately breast cancer.
A high-fat diet also produces a distinct gene signature in the tumors consistent with a subset of breast cancers known as basal-like that can carry a worse prognosis.
“This is very significant because even though the cancers arise from random mutations, the gene signature indicating a basal-like breast cancer shows the overarching and potent influence this type of diet has in the breast,” says Sandra Haslam, professor of physiology at Michigan State University.
“Cancers of this type are more aggressive in nature and typically occur in younger women. This highlights the significance of our work toward efforts against the disease.”

It’s the fat, not the weight gain
“It’s important to note that since our experimental model did not involve any weight gain from the high-fat diet, these findings are relevant to a much broader segment of the population than just those who are overweight,” says Richard Schwartz, microbiology professor. “This shows the culprit is the fat itself rather than weight gain.”
Early evidence indicates that the fat, which in this case was saturated animal fat, could potentially have permanent effects even if a low-fat diet is introduced later in life.
The preliminary finding requires further investigation, Schwartz cautions, and doesn’t indicate with certainty that humans will be affected in the same way.
“Overall, our current research indicates that avoiding excessive dietary fat of this type may help lower one’s risk of breast cancer down the road,” he says. “And since there isn’t any evidence suggesting that avoiding this type of diet is harmful, it just makes sense to do it.”
The research is funded by the National Institute of Environmental Health Sciences and the National Cancer Institute.
Source: Michigan State University
By blocking a protective enzyme in the microscopic parasite C. parvum, scientists have made it vulnerable to its host’s immune system.
In the developing world, Cryptosporidium parvum has long been the scourge of freshwater. A decade ago, it announced its presence in the United States, infecting over 400,000 people—the largest waterborne-disease outbreak in the country’s history.
Its ability to spread rapidly, combined with an incredible resilience to water decontamination techniques such as chlorination, led the National Institutes of Health (NIH) to add C. parvum to its list of public bioterrorism agents.
Currently, there are no reliable treatments for cryptosporidiosis, the disease caused by C. parvum, but that may be about to change with the identification of a target molecule. The study’s findings were recently published in the journal Antimicrobial Agents and Chemotherapy.
“In the young, the elderly, and immunocompromised people, such as those infected with HIV/AIDS, C. parvum is a very dangerous pathogen. Cryptosporidiosis is potentially life-threatening and can result in diarrhea, malnutrition, dehydration, and weight loss,” says first author Momar Ndao, director of the National Reference Centre of Parasitology at the McGill University Health Centre, who is also an assistant professor in the departments of medicine, immunology, and parasitology.
C. parvum is a microscopic parasite that lives in the intestinal tract of humans and many other mammals. It is transmitted through fecal-oral contact with an infected person or animal, or through ingestion of contaminated water or food. Since the parasite is resistant to chlorine and difficult to filter, cryptosporidiosis epidemics are hard to prevent.
A thick wall protects the oocysts of C. parvum, which are shed during the infectious stage, and allows them to survive for long periods outside the body until they spread to a new host.
“Most protozoan (single-celled) parasites like C. parvum use enzymes called proteases to escape the body’s immune defenses,” explains Ndao. “In this study, we were able to identify a protease inhibitor that can block the parasite’s ability to circumvent the immune system, and hide in intestinal cells called enterocytes, in order to multiply and destroy the intestinal flora.”
The discovery is the first time a molecular target has been found for the control of C. parvum.
“The next step will be to conduct human clinical trials to develop an effective treatment for this parasite, which affects millions of people around the world,” concludes Ndao.
Source: McGill University
The brains of infants who carry a gene associated with an increased risk for Alzheimer’s disease develop differently from those of babies who don’t have the gene.
While this discovery is neither diagnostic nor predictive of Alzheimer’s, it could be a step toward understanding how the gene variant APOE ε4 confers risk much later in life.
Researchers imaged the brains of 162 healthy babies between the ages of two months and 25 months. All of the infants had DNA tests to see which variant of the APOE gene they carried. Sixty of them had the ε4 variant that has been linked to an increased risk of Alzheimer’s.
Using a special MRI technique designed to study sleeping infants, they compared the brains of ε4 carriers with non-carriers. They found that children who carry the APOE ε4 gene tended to have increased brain growth in areas of the frontal lobe and decreased growth in several areas in the middle and rear of the brain. The decreased growth was found in areas that tend to be affected in elderly patients who have Alzheimer’s disease.
Researchers emphasized the findings, published in JAMA Neurology, do not mean that any of the children in the study are destined to develop Alzheimer’s or that the brain changes detected are the first clinical signs of the disease.
What the findings do suggest, however, is that brains of APOE ε4 carriers tend to develop differently from those of non-ε4 carriers beginning very early in life. It is possible that these early changes provide a “foothold” for the later pathologies that lead to Alzheimer’s symptoms. Information from this study may be an important step toward understanding how this gene confers risk for Alzheimer’s, something that is not currently well understood.
“This work is about understanding how this gene influences brain development,” says Sean Deoni, assistant professor of engineering who oversees the Advanced Baby Imaging Lab at Brown University. “These results do not establish a direct link to the changes seen in Alzheimer’s patients, but with more research they may tell us something about how the gene contributes to Alzheimer’s risk later in life.”

Roles in blood and brain
The APOE ε4 variant linked to Alzheimer’s is present in about 25 percent of the US population. Not everyone who carries the gene gets Alzheimer’s, but 60 percent of people who develop the disease have at least one copy of the ε4 gene.
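Those two percentages alone pin down carriers’ relative risk via Bayes’ rule, with no need for the overall prevalence of the disease. A small sketch using only the figures quoted above:

```python
# Bayes' rule on the article's two figures: 25% of the US population
# carries APOE e4, and 60% of people who develop Alzheimer's do.
# P(AD | e4) / P(AD) = P(e4 | AD) / P(e4), so the overall disease
# prevalence cancels out of the relative-risk calculation.
p_e4 = 0.25           # P(e4): carriers in the general population
p_e4_given_ad = 0.60  # P(e4 | AD): carriers among Alzheimer's patients

relative_risk = p_e4_given_ad / p_e4
print(relative_risk)  # 2.4: carriers face about 2.4x the average risk
```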
The gene is thought to have several different roles in the blood and brain, some of which remain to be clarified. For instance, it has been shown to participate in regulation of cholesterol, a molecule that is involved in the development of gray matter and white matter brain cells.
It has also been shown to participate in the regulation of amyloid, a brain protein that accumulates in Alzheimer’s and is now being targeted by investigational treatments. Studies are needed to clarify the ways in which APOE, human development, aging and other risk factors may conspire to produce the brain changes involved in Alzheimer’s disease.
The researchers used an MRI technique that quiets the MRI machine to a whisper, enabling the brains of healthy babies to be imaged while they sleep without medication. The technique also enables imaging of both gray matter—the part of the brain that contains neurons and nerve fibers—and white matter, which contains the fatty material that insulates the nerve fibers. Both gray and white matter are thought to have a role in Alzheimer’s. White matter growth begins shortly after birth and is an important measure of brain development.
Babies develop normally
“We’re in a good spot to be able to investigate how this gene influences development in healthy infants,” Deoni says. “These infants are not medicated and not showing any cognitive decline—quite the opposite, actually; they’re developing normally.”
There is no reason to believe that the children won’t continue to develop normally, Deoni says. There is no consistent evidence to suggest that ε4 carriers suffer any cognitive problems or developmental delay. And the areas of increased growth raise the possibility that the gene might actually confer some advantages to infants early on.
Ultimately the researchers hope the findings could lead to new strategies for preventing a disease that currently affects more than 5.2 million people in the US alone.
“It may sound scary that we could detect these brain differences in infants,” says Eric Reiman, executive director of the Banner Alzheimer’s Institute in Arizona and another senior author on the paper.
“But it is our sincere hope that an understanding of the earliest brain changes involved in the predisposition to Alzheimer’s will help researchers find treatments to prevent the clinical onset of Alzheimer’s disease—and do so long before these children become senior citizens.”
Researchers from the Translational Genomics Research Institute and the University of Southern California also participated in this study, which was supported by the National Institute of Mental Health and the National Institute on Aging, both part of the National Institutes of Health, and the state of Arizona.
Source: Brown University
The post Brains grow differently in babies with Alzheimer’s gene appeared first on Futurity.
By linking antibodies to certain diseases, a new method could uncover and confirm environmental triggers for diseases such as celiac and autism.
“We have two goals,” says professor Patrick Daugherty, a researcher with the department of chemical engineering and the Center for BioEngineering at the University of California, Santa Barbara. “We want to identify diagnostic tests for diseases where there are no blood diagnostics . . . and we want to figure out what might have given rise to these diseases.”
The process works by mining an individual’s immunological memory—a veritable catalog of the pathogens and antigens encountered by his or her immune system. The research is published in the Proceedings of the National Academy of Sciences.
“Every time you encounter a pathogen, you mount an immune response,” explains Daugherty. The response comes in the form of antibodies that are specific to the antigens—molecular, microbial, chemical—your body is resisting, and the formation of “memory cells” that are activated by subsequent encounters with the antigen.
Responses can vary, from minor reactions—a cough, or a sneeze—to serious autoimmune diseases in which the body turns against its own tissues and its immune system responds by destroying them, such as in the case of Type 1 diabetes and celiac disease.
“The trick is to determine which antibodies are linked to specific diseases,” says Daugherty. Celiac disease sufferers, for example, will have certain antibodies in their blood that bind to specific peptides—short chains of amino acids—present in wheat, barley, and rye. These peptides come from gluten, the root of allergies and sensitivities in some people. Like a lock and key, these antibodies—the locks—bind only to certain sequences of amino acids that comprise the peptides—the keys.
“People with celiac disease have two particular antibody types in their blood, which have proved to be enormously useful for diagnosis,” says Daugherty.
Sifting through antibodies
However, the sheer variety and number of antibodies present in a person’s blood at any given time have been a challenge for researchers trying to link specific illnesses with specific antibody molecules. One antigen can stimulate the production of many antibodies in response. What’s more, each individual’s antibodies to even the same antigen differ slightly in their form.
The idea of using molecular separation to find the disease antibodies has been around for over 20 years, says Daugherty, but no one had quite figured out how to sift through the vast number of molecules.
To sort through perhaps tens of thousands of antibody molecules present in a person’s blood, the research team—including John T. Ballew from the Biomolecular Science and Engineering graduate program, now a postdoctoral associate with the Koch Institute for Integrative Cancer Research at MIT—mixed a sample of a subject’s blood, which contains the antibody molecules, with a vast number of different peptides (about 10 billion).
‘Lock and key’ response
“All the keys associate with their preferred lock,” says Daugherty. “The peptides that can bind to an antibody, do so.” The researchers then pull out the bound pairs in a process that progressively narrows the pool to the antibody–peptide pairs most unique to a particular disease. Repeating the process with subsequent patients who may have the same symptoms, phenotypes, or genetic dispositions continues to whittle down the size of the peptide pool.
Further in vitro evolution of the best draft peptides can identify the particular sequence of amino acid keys that fit into the antibody locks. This sequence can be used to confirm the antibodies in question as the biomarkers specifically associated with the disease.
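As a toy sketch of the progressive narrowing described above (the function, peptide strings, and sample sets here are hypothetical illustrations, not data from the study), the selection step can be modeled with simple set operations:

```python
# Hypothetical, simplified model of the iterative screening described above.
# Each patient's serum "selects" the library peptides that their antibodies
# bind; repeating the selection across patients with the same disease, and
# removing peptides that healthy controls also select, whittles the pool
# down to disease-associated candidates.

def narrow_pool(library, patient_hits, control_hits):
    """Keep peptides bound in every patient sample but in no control sample."""
    candidates = set(library)
    for hits in patient_hits:
        candidates &= hits      # must bind antibodies in each patient
    for hits in control_hits:
        candidates -= hits      # discard peptides that controls also bind
    return candidates

# Toy data: a 6-peptide library, 3 patients, 2 healthy controls.
library = {"QLQPFP", "PQPQLP", "AAVGKF", "GGSSGG", "QQPFPQ", "LLNNKK"}
patients = [{"QLQPFP", "PQPQLP", "GGSSGG"},
            {"QLQPFP", "PQPQLP", "AAVGKF"},
            {"QLQPFP", "PQPQLP", "LLNNKK"}]
controls = [{"GGSSGG"}, {"AAVGKF"}]

print(sorted(narrow_pool(library, patients, controls)))
# -> ['PQPQLP', 'QLQPFP']
```

In the real screen the pool starts at roughly 10 billion peptides and the binding step is biochemical rather than computational, but the logic of progressive intersection and subtraction is the same.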
“The diagnostic performance of the reagents generated with this approach is excellent,” says Daugherty. “We can discover biomarkers with as little as a drop of blood, and the peptides discovered can be adapted into preferred low cost testing platforms widely used in clinical practice.”
The amino acid sequence of the evolved peptides, when cross-referenced with a database of known proteins, can identify the antigens (that contain the same peptide sequence). This, in turn, can then yield clues into what factors in the patient’s environment may have contributed to the disease.
The process may be used to gain insight on diseases that are thought to have environmental triggers, including Type-1 diabetes, autism, schizophrenia/bipolar disorder, Crohn’s disease, Parkinson’s disease, and perhaps even Alzheimer’s disease.
In cases such as Graves’ disease, where an antibody is identified as the cause (as opposed to simply an indicator), knowing the antibody’s structure can lead to more effective therapies. “If you can get rid of the antibody, you can treat the disease,” says Daugherty. “By finding these keys, you can block the antibody.”
Source: UC Santa Barbara
The post ‘Lock and key’ antibody test could offer diagnosis appeared first on Futurity.
When searching for habitable zones where life-sustaining planets might exist, and when designing instruments such as a Terrestrial Planet Finder to look for them, scientists would be better served by taking a conservative approach.
That means looking for planets that have liquid water and solid or liquid surfaces, not gas giants like Jupiter or Saturn, researchers say.
The habitable zone in a solar system is the area where liquid water, and by extension life, could exist.
Defining the habitable zone is key to the search for life sustaining planets in part because the idea of a habitable zone is used in designing the space-based telescopes that scientists would use to find planets where metabolism—and potentially life—might exist.
“It’s one of the biggest and oldest questions that science has tried to investigate: is there life off the earth?” says James Kasting, professor of geosciences at Penn State.
“NASA is pursuing the search for life elsewhere in the solar system, but some of us think that looking for life on planets around other stars may actually be the best way to answer this question.”
Recent research by Ravi Kopparapu, a post-doctoral researcher working with Kasting, suggests that the frequency of Earth-like planets in the habitable zones of stars known as M-dwarfs is 0.4 to 0.5. To find four potential Earth-like candidates, scientists would need to survey the habitable zones of about 10 cool stars.
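The survey arithmetic behind those numbers is simple: if each habitable zone hosts an Earth-like planet with frequency f, then finding k candidates requires examining roughly k/f stars. A minimal sketch (the function name is our own):

```python
def stars_to_survey(target_candidates, frequency):
    """Expected number of habitable zones to survey, assuming each star
    independently hosts an Earth-like planet with probability `frequency`."""
    return target_candidates / frequency

# Kepler-based M-dwarf estimate: a frequency of 0.4 means about 10 stars
print(round(stars_to_survey(4, 0.4)))   # about 10
# The older estimate of 0.1 would demand a far wider search
print(round(stars_to_survey(4, 0.1)))   # about 40
```

The same arithmetic is what makes the frequency estimate so consequential for telescope design: halving the assumed frequency doubles the number of stars, and hence the distance, a planet finder must cover.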
This data came from NASA’s Kepler Space Telescope, which collected information on transiting exoplanets for almost four years before being partially disabled. Previous estimates put this frequency at 0.1, which would have forced scientists using planet finders to survey more stars, searching farther away from our solar system.
Planet’s surface water
An even more recent estimate of the frequency of Earth-like planets was announced by Eric Petigura and colleagues at the Kepler Science Conference in early November. They calculated the figure at 0.22 around stars more similar to the Sun.
But Kopparapu and Kasting think this estimate could be too high by a factor of two because they used an overly optimistic estimate for the width of the habitable zone. If so, then the old value of 0.1 may be closer to the truth.
The ability of a planet to sustain liquid water is traditionally part of the criteria when searching for life-sustaining planets. While some have argued that subsurface water would be enough to sustain life, testing that hypothesis remotely would be virtually impossible, so the focus for astronomers should remain on surface water, the researchers note in a special issue of Proceedings of the National Academy of Sciences.
“All life that we know of is carbon-based and depends on the presence of liquid water during at least part of its life cycle,” Kasting writes in the paper. “Hence, if we see a planet that shows evidence for liquid water, we can immediately think about the possible presence of carbon-based life.”
While no federal funding to build a Terrestrial Planet Finder is currently in place, research related to exoplanets is growing. A TPF would allow for the detection of gases—or lack thereof—in planets’ atmospheres. If, for example, no signs of life are found after searching the habitable zones of 30 stars, that could be a reason for pessimism.
And, while it may be more appealing to know that there is evidence of life on other planets, learning that there is not would have scientific implications.
Why not on other planets?
“Maybe every planet out there that has the right conditions develops life,” Kasting says. “We don’t really know the answer to that. But, it could be. If you’re an optimist, you think it just takes the right conditions. It happened on Earth, why wouldn’t it happen somewhere else?”
It is possible that initial observations of Earth-like exoplanets could give an ambiguous answer, Kasting says. For example, oxygen might be found, but not methane. But even that could open the door to further exploration.
While the pursuit of life in the outer reaches of the sky might seem far-fetched at first glance, astronomers have talked about it as a second Copernican revolution.
“Did it make any difference when we figured out that the Earth was going around the sun rather than vice versa? If you’re just a practical-minded person, it made absolutely no difference to your life because life goes on Earth just the way it did,” Kasting says.
“But if you expand your mind a little bit, it helped us figure out our place in the universe—that we’re actually on a little planet going around a rather normal star amongst many other stars in the galaxy, and there are many galaxies out there.
“It’s been one of the most profound changes ever in human thought. We think of TPF as the next step in the Copernican revolution, to figure out if there are other Earths out there and if there is life on those planets.”
Source: Penn State
The post Search for habitable planets should veer conservative appeared first on Futurity.
Black Friday is a busy time for big-box retailers, as well as the warehouse workers that move their products. But despite claims that the industry provides middle-class wages for blue-collar jobs, a recent study finds that warehouse workers in the Inland Empire area of California average $23,000 per year for men and $19,000 per year for women.
Contrast that with $45,000 per year—the figure cited by an industry model developed by the Southern California Association of Governments, says Juan De Lara, author of the study and an assistant professor of American Studies and Ethnicity at the University of Southern California.
The figures raise the question of what this means for the economies of regions like Southern California, where logistics plays a vital role and has been touted by business and government leaders as a path towards economic advancement.
President Barack Obama raised the question earlier this year when he visited an Amazon.com shipping facility in Chattanooga, Tennessee, as part of a tour entitled “A Better Bargain for the Middle Class.” The claim took De Lara by surprise.
“When the president gets up there and says that this is an economic model that will allow us to move forward as a country, I immediately thought, ‘Right, except there’s a huge section of the workforce that doesn’t actually make the money touted in these industry-wide wages,’” he says.
Third-party employers
He says the discrepancy can largely be attributed to the fact that many warehouse workers aren’t actually employed by major retailers such as Amazon, Wal-Mart, and Target.
Workers directly employed by those retailers—a group that also includes large numbers of white-collar workers—might earn an average of $45,000 per year. But most warehouse workers are actually employed by third-party logistics companies.
Many of them have been fined for labor law violations that appear to be systemic problems in the industry, De Lara says.
Southern California workers
His study highlights the critical role that logistics has played in the Southern California economy. The industry employed more than 500,000 people in Los Angeles, Riverside, and San Bernardino counties in 2012, and has long been seen by policymakers as a way to create blue collar jobs in the aftermath of post-1980s manufacturing declines.
Private and public investments in port infrastructure led to a booming warehouse industry. About half of warehouse space needed to meet future port capacity in Los Angeles and Long Beach was expected to be built in inland counties.
Past warehouse and residential construction has made the Inland Empire into one of the fastest growing metropolitan regions in the country during the past 30 years, De Lara says.
Latinos moving into the region have largely driven this growth: nearly 80 percent of Inland Empire newcomers were Latinos. What does the future hold for this demographic group, California’s largest?
“Most Latinos in the region work in blue-collar occupations—more than 50 percent of Inland Southern California’s adult Latino population doesn’t have a college degree,” De Lara says. “So what’s the economic future for them in a region where warehouses and retail stores—both sectors with relatively low wages—are major employers?”
Wal-Mart’s power
While policymakers certainly have a role to play, the real decision-makers may be the corporate retailers driving the industry. Wal-Mart in particular has incredible power when it comes to determining this economic structure, De Lara says.
Pressure from advocacy groups and the media has changed the company’s behavior in the past. Wal-Mart has shifted to purchasing more American-made products and locally grown food in response to increased media attention, and has also increased its energy efficiency.
When $2.8 million in labor law fines were imposed on logistics contractors this year, Wal-Mart moved to distance itself from those companies.
It’s in Wal-Mart’s interest to do so, De Lara explains. “When Wal-Mart makes a decision to make these types of changes, it has a tremendous effect, not only across the industry but across the economy,” he says. The question remains: “Will it turn a blind eye or take a leadership position and really have a positive effect on workers at large?”
The post Despite Black Friday boom, logistics workers get sold short appeared first on Futurity.
A small study that looks beyond the brain suggests that vascular changes in the neck may play a role in the development of Alzheimer’s disease.
Studies on Alzheimer’s disease and other forms of dementia have long focused on what’s happening inside the brain. The new findings on an abnormality outside the brain have potential implications for a better understanding of Alzheimer’s and other neurological disorders associated with aging.
For the study, published in the Journal of Alzheimer’s Disease, researchers studied a hemodynamic abnormality in the internal jugular veins called jugular venous reflux or JVR that occurs when the pressure gradient reverses the direction of blood flow in the veins, causing blood to leak backwards into the brain.
JVR occurs in certain physiological situations when the internal jugular vein valves do not open and close properly, a malfunction that occurs more frequently in the elderly. This reverse flow is also believed to impair cerebral venous drainage.
The brain’s white matter is made of myelin and axons that enable communication between nerve cells.
More brain lesions
“We were especially interested to find an association between JVR and white matter changes in the brains of patients with Alzheimer’s disease and those with mild cognitive impairment,” says senior author Robert Zivadinov, professor of neurology at the School of Medicine and Biomedical Sciences at the University at Buffalo.
“Age-related white matter changes have long been associated with dementia and faster cognitive decline,” he says. “To the best of our knowledge, our study is the first to show that JVR is associated with a higher frequency of white matter changes, which occur in patients with mild cognitive impairment and Alzheimer’s disease.”
“We are the first to observe that JVR may be associated with formation of these lesions in the brain, given the fact that Alzheimer’s patients have more white matter lesions than healthy people,” says Ching-Ping Chung, the first author on the study and assistant professor of neurology at National Yang-Ming University.
“If this observation is validated in larger studies, it could be significant for the development of new diagnostic tools and treatments for pathological white matter lesions developed in Alzheimer’s disease and other forms of dementia.”
‘Dirty’ white matter
White matter changes have been found to have a direct relationship to the buildup of amyloid plaque, long seen as central to the development of Alzheimer’s disease.
“The accumulation of amyloid plaque may result from the inability of cerebrospinal fluid to be properly cleared from the brain,” says Clive Beggs, second author on the study and professor of medical engineering at the University of Bradford.
In addition, JVR appears to be associated with dirty-appearing white matter, which is thought to represent early stage lesion formation.
“To the best of our knowledge, this is one of the first studies to explore the impact of dirty-appearing white matter in the elderly,” Beggs says. The significance of dirty-appearing white matter in the elderly needs more study.
Brain to neck
The authors caution that the study is small and that the results must be validated in larger, future studies. The research involved 12 patients with Alzheimer’s disease, 24 with mild cognitive impairment, and 17 age-matched elderly controls. Participants underwent Doppler ultrasound exams and magnetic resonance imaging scans.
The impact of hemodynamic changes in veins from the brain to the neck has been the focus of numerous studies.
“Given the major finding of our group in 2011 that both healthy controls and people with a variety of neurological diseases present with structural and hemodynamic changes of the extracranial venous system, we thought it was important to study how they might be involved in the development of Alzheimer’s disease and other important neurodegenerative conditions,” Zivadinov says.
The frequency of JVR increases with aging and its accumulated effects on cerebral circulation may take many years to develop. Patients are likely to be asymptomatic for a long time, which would explain why the condition is seen in both healthy people and those with neurological diseases.
Researchers from the University of Bradford, Taipei Veterans General Hospital in Taipei and National Yang-Ming University, and the Buffalo Neuroimaging Analysis Center in the University at Buffalo department of neurology contributed to the study.
Source: University at Buffalo
“Digital activism” is most often nonviolent and tends to work best when social media is combined with street-level organization, new research shows.
“This is the largest investigation of digital activism ever undertaken,” says Philip Howard, professor of communication, information, and international studies at the University of Washington. “We looked at just under 2,000 cases over a 20-year period, with a very focused look at the last two years.”
Howard and coauthors Frank Edwards and Mary Joyce, both doctoral students, oversaw 40 student analysts who reviewed news stories by citizen and professional journalists describing digital activism campaigns worldwide.
A year of research and refining brought the total down to between 400 and 500 well-verified cases representing about 150 countries.
A main finding of the report: Digital activism tends to be nonviolent, despite what many may think.
“In the news we hear of online activism that involves anonymous or cyberterrorist hackers who cause trouble and break into systems,” Howard says. “But that was two or three percent of all the cases—far and away, most of the cases are average folks with a modest policy agenda” that doesn’t involve hacking or covert crime.
Other findings include:
- Digital activism campaigns tend to be more successful when waged against government rather than business authorities. There have been many activist campaigns against corporations, but they don’t seem to have succeeded as well as those that had governments for a target.
- Effective digital activism employs a number of social media tools. Tweeting alone is less successful. No single tool offers a guarantee of campaign success.
- Governments still tend to lag behind activist movements in the use and mastery of new social media tools. They sometimes use the same tools, but it’s always months after others have tried them.
These factors, taken together, “are the magic ingredients, especially when the target is a government—a real recipe for success,” Howard says.
In time, the data gathered for this work might yield more insight into the world of digital activism, Howard says.
Unanswered questions include why there are regional disparities among digital tool use, why phones are prevalent but text messaging is rare in digital campaigns, and whether external political, social, or cultural phenomena influence patterns and the effectiveness of digital activism.
Funding for the research came from the United States Institute of Peace, the National Science Foundation, and the University of Washington department of communication.
Source: University of Washington
A computer program called the Never Ending Image Learner (NEIL) is running 24 hours a day, searching the internet for images, and doing its best to understand them on its own.
As NEIL’s visual database grows, the computer program gains common sense on a massive scale.
NEIL leverages recent advances in computer vision that enable computer programs to identify and label objects in images, to characterize scenes, and to recognize attributes, such as colors, lighting, and materials, all with a minimum of human supervision.
In turn, the data it generates will further enhance the ability of computers to understand the visual world.
But NEIL also makes associations between these things to obtain common sense information that people just seem to know without ever saying—that cars often are found on roads, that buildings tend to be vertical and that ducks look sort of like geese.
Based on text references, it might seem that the color associated with sheep is black, but people—and NEIL—nevertheless know that sheep typically are white.
“Images are the best way to learn visual properties,” says Abhinav Gupta, assistant research professor in Carnegie Mellon University’s Robotics Institute. “Images also include a lot of common sense information about the world. People learn this by themselves and, with NEIL, we hope that computers will do so as well.”
3 million images so far
A computer cluster has been running the NEIL program since late July and already has analyzed three million images, identifying 1,500 types of objects in half a million images and 1,200 types of scenes in hundreds of thousands of images. It has connected the dots to learn 2,500 associations from thousands of instances.
One motivation for the NEIL project is to create the world’s largest visual structured knowledge base, where objects, scenes, actions, attributes and contextual relationships are labeled and catalogued.
“What we have learned in the last 5 to 10 years of computer vision research is that the more data you have, the better computer vision becomes,” Gupta says.
When the computer gets it wrong
Some projects, such as ImageNet and Visipedia, have tried to compile this structured data with human assistance. But the scale of the Internet is so vast—Facebook alone holds more than 200 billion images—that the only hope to analyze it all is to teach computers to do it largely by themselves.
Abhinav Shrivastava, a PhD student in robotics, says NEIL can sometimes make erroneous assumptions that compound mistakes, so people need to be part of the process.
A Google Image search, for instance, might convince NEIL that “pink” is just the name of a singer, rather than a color.
“People don’t always know how or what to teach computers,” he says. “But humans are good at telling computers when they are wrong.”
People also tell NEIL what categories of objects, scenes, etc., to search and analyze. But sometimes, what NEIL finds can surprise even the researchers. It can be anticipated, for instance, that a search for “apple” might return images of fruit as well as laptop computers.
But Gupta and his landlubbing team had no idea that a search for F-18 would identify not only images of a fighter jet, but also of F18-class catamarans.
As its search proceeds, NEIL develops subcategories of objects—tricycles can be for kids, for adults and can be motorized, or cars come in a variety of brands and models. And it begins to notice associations—that zebras tend to be found in savannahs, for instance, and that stock trading floors are typically crowded.
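Conceptually, this association step amounts to co-occurrence counting over labeled images. The sketch below is only a schematic illustration of that idea, not NEIL's actual algorithm; the data and function name are hypothetical:

```python
from collections import Counter

def learn_associations(labeled_images):
    """Map each object label to the scene it co-occurs with most often."""
    counts = Counter()
    for objects, scene in labeled_images:
        for obj in objects:
            counts[(obj, scene)] += 1
    best = {}  # object -> (scene, co-occurrence count seen so far)
    for (obj, scene), n in counts.items():
        if n > best.get(obj, (None, 0))[1]:
            best[obj] = (scene, n)
    return {obj: scene for obj, (scene, _) in best.items()}

# Toy data: each entry is (objects detected in the image, scene label).
images = [({"zebra", "acacia"}, "savannah"),
          ({"zebra"}, "savannah"),
          ({"zebra"}, "zoo"),
          ({"trader", "monitor"}, "trading floor")]

print(learn_associations(images)["zebra"])   # -> savannah
```

At NEIL's scale the counting runs over hundreds of thousands of automatically labeled images rather than a handful, which is why its associations can emerge without anyone stating them explicitly.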
NEIL is computationally intensive, the research team notes. The program runs on two clusters of computers that include 200 processing cores.
The Office of Naval Research and Google Inc. support the project. The research team will present its findings on Dec. 4 at the IEEE International Conference on Computer Vision in Sydney, Australia.
Source: Carnegie Mellon University
The post Computer gets smarter by looking at online pics 24-7 appeared first on Futurity.
Even though female fence lizards with blue markings are more common than those without, the males seem to find the unmarked females more attractive, a new study shows.
The results of the research, which offer a snapshot into the evolution of male-female differences, appear in the early online edition of Biology Letters.
Male fence lizards of the species Sceloporus undulatus have bright blue “badges” outlined in black on both sides of their throats and abdomens, and previous studies have shown that testosterone drives the production of these badges, which are highly visible during the animal’s courtship rituals and other behavioral displays.
However, many females also have this blue ornamentation, although it is less vibrant and covers a smaller area.
“Just as some human females have male-pattern facial hair, albeit less pronounced than in males, some female fence lizards display the typically-male blue markings,” says Tracy Langkilde, an associate professor of biology at Penn State.
“However, whereas in human females the masculine characteristics are less common within the population, in fence lizards, we see the opposite pattern: About three quarters of the females are so-called ‘bearded ladies,’ making masculinized females much more common than their counterparts with little or no blue ornamentation.”
Using a combination of field observations and laboratory manipulations, Langkilde and graduate student Lindsey Swierk designed experiments to determine whether male lizards preferred the more-masculine bearded ladies or their more-feminine sisters.
“We found that, although males do not say ‘no’ to bearded ladies, they clearly discriminate against blue-ornamented females, opting more often to court females without coloring,” Swierk says. “The question is ‘why’? Is it possible the males mistake the bearded ladies for fellow males? Or are bearded ladies somehow less fit and, therefore, less attractive to males?”
Lighter, later babies
To answer this last question, the team members studied the differences between the reproductive output of bearded ladies and the less-common females without male-pattern coloring.
They found that, compared to their more-feminine counterparts, bearded ladies laid clutches that weighed less. In addition, they laid their eggs about 13 days later in the mating season. “The lower mass may indicate that the eggs have smaller yolks and so the embryos don’t have as many available nutrients,” Langkilde says.
“As for the timing, the 13-day difference is significant. It means that the bearded ladies’ offspring hatch later, so they have less time to gather food and to prepare for overwinter hibernation, which is a tough period that few babies survive.
“As a result, females with less blue coloration may have an evolutionary advantage with regard to the fitness of their offspring. This might explain why males tend to prefer them.”
Langkilde and Swierk hypothesize that, although bearded ladies currently are more common in many fence-lizard populations, the evolutionary tide might be turning. “What we might be observing is a gradual trend toward more sexual dimorphism within this species,” Swierk says.
Sexual dimorphism is defined as the difference in color, shape, size, or structure between males and females of the same species. For example, human males tend to be larger than human females and they also have other distinguishing characteristics, such as stronger brow ridges and more facial and body hair.
Signs of fitness
Darwin and others have suggested that one of the major factors driving these differences is sexual selection—the theory that an animal chooses a member of the opposite sex based on some observable feature that signals good health and superior genes.
Although the classic example of this phenomenon involves selection of males by females—namely, the male peacock’s elaborate and calorically expensive tail attracting the female peahen—sexual selection likely also explains why males are more attracted to females with certain “fitness-signaling” traits.
“It is possible that, over the course of several generations, we will see the more-feminine lizards winning out over their bearded-lady sisters,” Langkilde adds. “In time, the percentage of bearded ladies could dwindle and the balance could shift. However, another possibility is that bearded ladies have some other evolutionary advantage that keeps their numbers high within populations.”
Sexy sons?
That other evolutionary advantage, the team members explain, could be behavioral. For example, bearded ladies, which likely have higher levels of testosterone, might be more aggressive and thus better able to fight off predators or competitors when compared to the more-feminine females.
“Bearded ladies also may be more sexually aggressive so, although the males don’t prefer them, they may initiate more of the courtship and mating and produce as many or more offspring for this reason,” Langkilde says. “Another possibility,” she adds, “is that bearded ladies may benefit by having especially sexy sons.”
The team’s previous research has shown that females prefer really blue males and so, “if these bearded ladies pass their vivid coloration on to their sons, this could give them an advantage by ensuring they have lots of grandchildren,” Langkilde says.
The National Science Foundation, a Gaylord Donnelley Environmental Fellowship, the National Geographic Society, and the Eppley Foundation for Research funded the research.
Source: Penn State
A combination of three gases could have created a greenhouse effect on Mars 3.8 billion years ago that made the planet warm enough for liquid water to flow across the surface.
In a new study published in Nature Geoscience, a research team uses a climate model to show that an atmosphere with sufficient carbon dioxide, water, and molecular hydrogen could have warmed the surface of early Mars to above freezing.
That flowing water could have formed the ancient valley networks, such as Nanedi Valles, much the way sections of the Grand Canyon snake across the western United States today.
Previous efforts to produce temperatures warm enough for liquid water relied on climate models that included only carbon dioxide and water, and those attempts were unsuccessful.
“This is exciting because explaining how early Mars could have been warm and wet enough to form the ancient valleys had scientists scratching their heads for the past 30 years,” says Ramses M. Ramirez, a doctoral student working with James Kasting, a professor of geosciences at Penn State.
“We think we may have a credible solution to this great mystery.”
Volcanoes, not meteorites
The researchers note that one alternative theory is that the Martian valleys formed after large meteorites bombarded the planet, generating steam atmospheres that then rained out. But this mechanism cannot produce the large volumes of water thought necessary to carve the valleys.
“We think that there is no way to form the ancient valleys with any of the alternate cold early Mars models,” says Ramirez. “However, the problem with selling a warm early Mars is that nobody had been able to put forth a feasible mechanism in the past three decades. So, we hope that our results will get people to reconsider their positions.”
Ramirez and post-doctoral researcher Ravi Kopparapu co-developed a one-dimensional climate model to demonstrate the possibility that the gas levels from volcanic activity could have created enough hydrogen and carbon dioxide to form a greenhouse and raise temperatures sufficiently to allow for liquid water.
Once the model was developed, Ramirez ran it with new hydrogen absorption data to recreate the conditions on early Mars, a time when the sun was about 30 percent less bright than it is today.
“It’s kind of surprising to think that Mars could have been warm and wet because at the time the sun was much dimmer,” Ramirez says.
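The puzzle Ramirez describes can be seen in a back-of-the-envelope energy balance. The sketch below is a minimal illustration, not the team’s one-dimensional climate model; the solar flux and albedo values are rough textbook figures assumed for the example, not numbers from the study.

```python
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
S_MARS_TODAY = 590.0   # approximate present-day solar flux at Mars, W m^-2
ALBEDO = 0.25          # assumed planetary albedo (illustrative)

def equilibrium_temp(flux, albedo=ALBEDO):
    """No-greenhouse equilibrium temperature from planetary energy balance:
    absorbed sunlight = emitted thermal radiation."""
    return (flux * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

t_today = equilibrium_temp(S_MARS_TODAY)         # roughly 210 K
t_early = equilibrium_temp(0.70 * S_MARS_TODAY)  # sun about 30 percent dimmer

# Both values sit far below 273 K, which is why a strong greenhouse
# (here, carbon dioxide plus water plus hydrogen) is needed to explain
# liquid water on early Mars.
```

Under these assumptions, even present-day Mars is well below freezing without a greenhouse, and a 30-percent-dimmer sun pushes the no-greenhouse temperature lower still.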
Mars’ mantle appears to be more reduced than Earth’s, based on evidence from the Shergotty, Nakhla, and Chassigny meteorites, Martian meteorites named for the towns near which they were found. A more reduced mantle outgasses more hydrogen relative to water, thus bolstering the hydrogen greenhouse effect.
“The hydrogen molecule is symmetric and appears to be quite boring by itself,” says Ramirez. “However, other background gases, such as carbon dioxide, can perturb it and get it to function as a powerful greenhouse gas at wavelengths where carbon dioxide and water don’t absorb too strongly. So, hydrogen fills in the gaps left by the other two greenhouse gases.”
Researchers on the project include Michael E. Zugger, senior research engineer, Applied Research Laboratory, Penn State; Tyler D. Robinson, University of Washington; and Richard Freedman, SETI Institute.
NASA Astrobiology Institute’s Virtual Planetary Laboratory supported the project.
Source: Penn State
Buyers are more likely to purchase something from an online classified ad if they think the seller is white, research shows.
A yearlong experiment tested for racial bias among buyers by selling iPods through about 1,200 online classified ads placed in more than 300 locales across the United States, ranging from small towns to major cities. Each ad featured a photograph of the iPod held by a man’s hand that was either dark-skinned (“black”), light-skinned (“white”), or light-skinned with a wrist tattoo. In all other respects, the photos were very similar.
Black sellers did worse than white sellers on a variety of metrics: they received 13 percent fewer responses, 18 percent fewer offers, and offers that were 11 to 12 percent lower, according to the study that was conducted from March 2009 to March 2010.
The results were similar in magnitude to those associated with a white seller with a tattoo, which the authors included to serve as a “suspicious” white control group.
Published online in The Economic Journal, a publication of the Royal Economic Society, the findings also show that buyers corresponding with a black seller behave in ways suggesting they trust the seller less: they are 17 percent less likely to include their names, 44 percent less likely to agree to a proposed delivery by mail, and 56 percent more likely to express concern about making a long-distance payment.
“We were really struck to find as much racial discrimination as we did,” says Jennifer Doleac, assistant professor of public policy and economics at the University of Virginia.
At the time the ads were placed, among the 300-plus local ad sites, the average market had 15.7 other advertisements for iPod Nanos that had been listed in the previous week. Just 18 percent of the experiment’s ads were posted in markets with at least 20 other advertisements.
‘Own-race’ sellers
In those thicker markets with at least 20 other iPod ads, black sellers received the same number of offers and equal best offers relative to whites.
Conversely, black sellers suffered particularly poor outcomes in thin markets with fewer buyers and sellers, where they received 23 percent fewer offers and best offers that were 12 percent lower—very similar to the results for the tattooed sellers’ ads.
Black sellers do worst in markets with high property crime rates and more racially segregated housing, suggesting that at least part of the explanation is “statistical discrimination”—that is, where race is used as a proxy for unobservable negative characteristics, such as more time or potential danger involved in the transaction, or the possibility that the iPod may be stolen—rather than simply “taste-based” discrimination (against race itself), Doleac says.
However, “it is also possible that animus against black sellers is higher in high-crime or high-isolation markets.”
Black sellers also do better in markets with larger black populations, “suggesting that the disparities may be driven, in part, by buyers’ preference for own-race sellers,” the researchers write.
The experiment ads all featured a silver, 8-gigabyte “current model” iPod nano digital media player, described as new in an unopened box, and for sale because the seller did not need it.
Less underlying trust
The researchers never met with the buyers in person. Instead, when it came time to set up a meeting, they told the buyer they were out of town and offered to ship the iPod in exchange for payment via PayPal, an electronic payment system widely used for online person-to-person transactions.
This proposal is generally suspicious, as classified ad websites like Craigslist strongly advise users to deal locally and in person, and to avoid transactions that involve shipping, mailing, or online payments. In response to this suspicious offer, those corresponding with black sellers reacted much more negatively, implying less underlying trust.
The average ad received 2.7 responses (probable scam responses were ignored), and the text of all subsequent email interactions was scripted to be consistent.
“The environment in which we conducted our experiment has many advantages,” Doleac says. “Buyers have no reason to make offers that they do not anticipate ending in a transaction. Trust also plays a key role in the interactions—the buyer expects to meet a seller in order to complete the transaction and faces the real possibility of deception or theft.
“These are characteristics of many ‘real-world’ market transactions that are not present in the markets considered by many other studies.
“We believe our study isolates the effect of race on market outcomes more convincingly than previous studies and provides some insight into why buyers are discriminating.”
Luke C.D. Stein, assistant professor of finance at Arizona State University, was a co-author of the study, which was conducted while he and Doleac were doctoral students in economics at Stanford University.
Source: University of Virginia
The post Online shoppers more likely to buy from white sellers appeared first on Futurity.
A new study suggests that it might take a lot less carbon than previously thought to reach the global temperature scientists deem unsafe.
Even if emissions came to a sudden halt, the carbon dioxide already in Earth’s atmosphere could continue to warm our planet for hundreds of years.
The researchers simulated an Earth on which, after 1,800 billion tons of carbon entered the atmosphere, all carbon dioxide emissions suddenly stopped. Scientists commonly use the scenario of emissions screeching to a stop to gauge the heat-trapping staying power of carbon dioxide.
Within a millennium of this simulated shutoff, the carbon itself faded steadily, with 40 percent absorbed by Earth’s oceans and landmasses within 20 years and 80 percent soaked up by the end of the 1,000 years.
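Those uptake figures can be mimicked with a simple two-timescale decay curve. This is purely illustrative: the parameters below were chosen to reproduce the 40-percent-in-20-years and 80-percent-in-1,000-years numbers above and do not come from the study itself.

```python
import math

def airborne_fraction(t, fast=0.45, tau_fast=10.0, tau_slow=988.0):
    """Toy model: fraction of an emitted CO2 pulse still in the atmosphere
    after t years, written as a fast uptake term plus a slow one."""
    return fast * math.exp(-t / tau_fast) + (1.0 - fast) * math.exp(-t / tau_slow)

f_20 = airborne_fraction(20)      # about 0.60 (40 percent absorbed)
f_1000 = airborne_fraction(1000)  # about 0.20 (80 percent absorbed)
```

The point of the two terms is that ocean and land uptake is rapid at first and then slows sharply, so a substantial fraction of the pulse lingers for centuries.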
By itself, such a decrease of atmospheric carbon dioxide should lead to cooling. But the heat trapped by the carbon dioxide took a divergent track.
Oceans absorb less heat
After a century of cooling, the planet warmed by 0.37 degrees Celsius (0.66 Fahrenheit) during the next 400 years as the ocean absorbed less and less heat. While the resulting temperature spike seems slight, a little heat goes a long way here. Earth has warmed by only 0.85 degrees Celsius (1.5 degrees Fahrenheit) since pre-industrial times.
The Intergovernmental Panel on Climate Change estimates that global temperatures a mere 2 degrees Celsius (3.6 degrees Fahrenheit) higher than pre-industrial levels would dangerously interfere with the climate system.
To avoid that point would mean humans have to keep cumulative carbon dioxide emissions below 1,000 billion tons of carbon, about half of which has already been put into the atmosphere since the dawn of industry.
The lingering warming effect the researchers found, however, suggests that the 2-degree point may be reached with much less carbon, says first author Thomas Frölicher, who conducted the work as a postdoctoral researcher at Princeton University.
“If our results are correct, the total carbon emissions required to stay below 2 degrees of warming would have to be three-quarters of previous estimates, only 750 billion tons instead of 1,000 billion tons of carbon,” says Frölicher, now a researcher at the Swiss Federal Institute of Technology in Zurich. “Thus, limiting the warming to 2 degrees would require keeping future cumulative carbon emissions below 250 billion tons, only half of the already emitted amount of 500 billion tons.”
Hard to stop climate change
The researchers’ work contradicts a scientific consensus that the global temperature would remain constant or decline if emissions were suddenly cut to zero. But previous research did not account for a gradual reduction in the oceans’ ability to absorb heat from the atmosphere, particularly the polar oceans, Frölicher notes.
Although carbon dioxide steadily dissipates, Frölicher and his co-authors were able to see that the oceans, which remove heat from the atmosphere, gradually take up less of it. Eventually, the residual heat offsets the cooling that occurred due to dwindling amounts of carbon dioxide.
The research shows that the change in ocean heat uptake in the polar regions has a larger effect on global mean temperature than a change in low-latitude oceans, a mechanism known as “ocean-heat uptake efficacy.” This mechanism was first explored in a 2010 paper by Frölicher’s co-author, Michael Winton, a researcher at the National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory (GFDL) at Princeton.
“The regional uptake of heat plays a central role. Previous models have not really represented that very well,” Frölicher says.
“Scientists have thought that the temperature stays constant or declines once emissions stop, but now we show that the possibility of a temperature increase cannot be excluded,” Frölicher adds. “This is illustrative of how difficult it may be to reverse climate change—we stop the emissions, but still get an increase in the global mean temperature.”
The study, published in the journal Nature Climate Change, was funded by the Swiss National Science Foundation and the Princeton University Carbon Mitigation Initiative.
Source: Princeton University
The post Planet may warm for centuries, even if CO2 shuts off appeared first on Futurity.
Scientists have developed a method that enables more-accurate prediction of how RNA molecules fold within living cells.
The findings may shed new light on how plants—as well as other living organisms—respond to environmental conditions.
Potential implications of the methodology for human health include, for example, learning how an infection-induced fever could affect the RNA structures of both humans and pathogens.
A paper by the research team, led by Penn State’s Sarah M. Assmann, professor of biology, and Philip Bevilacqua, professor of chemistry, appears in Nature.
“Scientists have studied a few individual RNA molecules, but now we have data on almost all the RNA molecules in a cell—more than 10,000 different RNAs,” Assmann says. “We are the first to determine, on a genome-wide basis, the structures of the RNA molecules in a plant, or in any living organism.”
Creating better crops
Temperature and drought are among the environmental stress factors that affect the structure of RNA molecules, thereby influencing how genes are “expressed”—how their functions are turned on or turned off.
“Climate change is predicted to cause increasingly extreme and unpredictable heat waves and droughts, which would impact our food crops, in part by affecting the structures of their RNA molecules and so influencing their translation into proteins,” Bevilacqua says.
“The more we understand about how environmental factors affect RNA structure and thereby influence gene expression, the more we may be able to breed—or develop with biotechnological methods—crops that are more resistant to those stresses. Such crops, which could perform better under more-marginal conditions, could help feed the world’s growing population.”
The project involved determining the structures of the varieties of RNA molecules in the plant Arabidopsis thaliana, which is used worldwide as a model species for scientific research. Arabidopsis thaliana, commonly known as mouse-ear cress, is an ideal organism for RNA studies, the researchers say, because it was the first plant species to have its full genome sequenced and has the greatest number of genetic tools available.
How does RNA folding work?
RNA is the intermediate molecule between DNA and proteins in all living things. It is a critical component in the pathway of gene expression, which controls an organism’s function. Unlike the double-stranded DNA molecule, which is compressed into cells by twisting and wrapping around proteins, RNA is single stranded, and folds back on itself.
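How a single strand can fold back on itself by pairing complementary bases is often introduced with the classic Nussinov maximum base-pairing algorithm. The sketch below is a textbook simplification, not the thermodynamic machinery that real structure-prediction tools, or this study, actually use.

```python
# Watson-Crick pairs plus the G-U wobble pair allowed in RNA.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def max_pairs(seq, min_loop=3):
    """Maximum number of nested base pairs in an RNA sequence
    (Nussinov dynamic program, with a minimum hairpin-loop length)."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                  # base j left unpaired
            for k in range(i, j - min_loop):     # try pairing base k with j
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + dp[k + 1][j - 1] + 1)
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

# A simple hairpin: three G-C stem pairs around an AAA loop.
stem_pairs = max_pairs("GGGAAACCC")  # 3
```

Real predictors score stacking energies and loop penalties rather than simply counting pairs, and in-cell chemical-probing data of the kind this study generates are used to constrain such models.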
The researchers set out to answer the question, How exactly does RNA fold in a cell and how does that folding regulate gene function? “We needed a tool to answer that question,” says Bevilacqua.
“That tool involves introducing a chemical into the plant that can modify some segments of the RNA but not others, which then gives a readout of the structure of the RNA. Using this technique we can figure out which classes of genes are associated with certain RNA structural traits. And we can try to understand how these RNA structural changes relate to certain biological functions.”
Finding a pattern
“Previously, researchers would query the structures of individual RNAs in a cell one by one, and it was a tedious process,” says Assmann. “You can’t abstract rules or generalities about how RNAs are behaving just from knowing the structures of one or a few RNAs—you can’t get a pattern.
“Now that we have genome-wide information for a particular organism, we can start to abstract patterns of how RNA structure influences gene expression and ultimately plant function. Other scientists can query their organisms of interest and ask what rules they can abstract. Are there universal rules that will be true for all organisms for how RNA structure influences gene expression?”
Bevilacqua adds, “Because RNA is so central in its role in gene regulation, the tools we’ve developed can be transferred to scientists who are working with essentially any biological system.”
The Human Frontiers Science Program (HFSP), Penn State Eberly College of Science, and the Penn State Huck Institutes funded the research.
Source: Penn State