Futurity.org

Research news from top universities.

Gender differences are smaller than we think

7 hours 17 min ago

Although gender plays a big part in our identities, new research finds men and women aren’t as different as we tend to think.

Gender stereotypes can influence beliefs and create the impression that the differences are large, says Zlatan Krizan, an associate professor of psychology at Iowa State University.

To separate fact from fiction, Krizan and colleagues conducted a meta-synthesis of more than 100 meta-analyses of gender differences. Combined, the studies they aggregated included more than 12 million people.

Their report, published in American Psychologist, found an almost 80 percent overlap for more than 75 percent of the psychological characteristics, such as risk-taking, occupational stress, and morality. Simply put, our differences are not so profound.
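
To make the headline figure concrete: for two normal distributions with equal variance, the overlapping proportion follows directly from the standardized mean difference (Cohen's d) as overlap = 2Φ(−|d|/2), where Φ is the standard normal CDF. The Python sketch below is an illustration of that relationship, not the study's code; it shows that a modest d of 0.5 already yields roughly 80 percent overlap.

```python
# Overlap of two unit-variance normal distributions whose means differ by d:
# overlap = 2 * Phi(-|d| / 2). An illustration, not the study's code.
from scipy.stats import norm

def overlap_coefficient(d: float) -> float:
    """Overlapping proportion of N(0, 1) and N(d, 1)."""
    return 2 * norm.cdf(-abs(d) / 2)

# A modest standardized difference of d = 0.5 gives about 80 percent overlap,
# in line with the figure reported for most traits in the meta-synthesis.
print(f"d = 0.5 -> overlap ≈ {overlap_coefficient(0.5):.0%}")  # 80%
```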

“This is important because it suggests that when it comes to most psychological attributes, we are relatively similar to one another as men and women,” Krizan says.

“This was true regardless of whether we looked at cognitive domains, such as intelligence; social personality domains, such as personality traits; or at well-being, such as satisfaction with life.”

10 significant gaps

The similarities were also consistent regardless of age and over time. However, researchers don’t dispute that men and women have their differences.

They identified 10 attributes in which there was a significant gap between genders. Some of these characteristics fell in line with stereotypes. For example, men were more aggressive and masculine, while women had a closer attachment to peers and were more sensitive to pain.

If we’re so similar, why do we think we’re different?

The purpose of the meta-synthesis was not to identify why men and women are different, but to measure by how much.

Extremes can be misleading

The results contradict what many people think, and Krizan has a few explanations as to why. One reason is the difference in extremes. The evidence researchers aggregated focuses on a typical range of characteristics, but on the far end of the spectrum the differences are often exaggerated, Krizan says.

“People tend to overestimate the differences because they notice the extremes,” Krizan says.

He uses aggression as one example. “If you look at incarceration rates to compare the aggressiveness of men and women, the fact that men constitute the vast majority of the prison population supports the idea that men are extremely more aggressive. However, it’s a misleading estimate of how much typical men and women differ on aggressiveness, if that’s the only thing you look at for comparison,” he says.
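
Krizan's point about extremes can also be put in numbers: a modest average difference produces a lopsided ratio far out in the tail. The Python sketch below uses assumed values (a half-standard-deviation mean difference and a 2-SD cutoff), not study data.

```python
# How a small mean difference inflates representation at the extremes.
# All values here are assumed for illustration.
from scipy.stats import norm

d = 0.5       # assumed modest standardized mean difference between groups
cutoff = 2.0  # an "extreme" threshold: 2 SD above the grand mean

p_high = norm.sf(cutoff - d / 2)  # tail share for the higher-mean group
p_low = norm.sf(cutoff + d / 2)   # tail share for the lower-mean group

print(f"beyond the cutoff: {p_high:.3f} vs {p_low:.3f}")
print(f"tail ratio ≈ {p_high / p_low:.1f} to 1")  # ≈ 3.3 to 1
```

So even though typical members of the two groups differ little, the extreme end of the distribution (such as a prison population) can be dominated by one group.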

Additionally, people notice multiple differences simultaneously, which can give the impression of a larger effect. Researchers looked at the average for each trait individually rather than a combination of differences.

“The difference on any one trait is pretty small,” Krizan says. “When there are several smaller differences, people might think there’s a big difference because the whole configuration has a different flavor. I think they make a mistake assuming that any given trait is very different from typical men to women.”

Researchers also point out that they did not try to determine to what extent these differences reflect real physical or biological differences between genders. For example, do men tolerate more pain because they believe that is what they should do as a man? Krizan says some behavioral differences may be learned through social roles.

Although men may be said to come from Mars and women from Venus, these findings remind us that we all come from Earth after all, he adds.

Krizan worked on the study with Ethan Zell, an assistant professor at the University of North Carolina at Greensboro, and Sabrina Teeter, a graduate student at Western Carolina University.

Source: Iowa State University

This is how phishing scams trick you

8 hours 56 min ago

After all the warnings, how do people still fall for email “phishing” scams? New research shows how certain strategies on the part of the scammers can affect recipients’ thinking and increase their chances of falling victim.

“Information-rich” emails include graphics, logos, and other brand markers that communicate authenticity, says study coauthor Arun Vishwanath, professor of communication at the University at Buffalo.

“In addition,” he says, “the text is carefully framed to sound personal, arrest attention, and invoke fear. It often will include a deadline for response for which the recipient must use a link to a spoof ‘response’ website.

“Such sites, set up by the phisher, can install spyware that data mines the victim’s computer for usernames, passwords, address books, and credit card information.

Why ‘presence’ matters

“We found that these information-rich lures are successful because they are able to provoke in the victim a feeling of social presence, which is the sense that they are corresponding with a real person,” Vishwanath says.

“‘Presence’ makes a message feel more personal, reduces distrust, and also provokes heuristic processing, marked by less care in evaluating and responding to it,” he says. “In these circumstances, we found that if the message asks for personal information, people are more likely to hand it over, often very quickly.

“In this study,” he says, “such an information-rich phishing message triggered a victimization rate of 68 percent among participants.

“These are significant findings that indicate the importance of developing anti-phishing interventions that educate individuals about the threat posed by richness and presence cues in emails,” he explains.

68 percent fell for it

The study involved 125 undergraduate university students—a group often targeted by phishers—who were sent an experimental phishing email from a Gmail account prepared for use in the study. The message used a reply-to address and sender’s address, both of which included the name of the university.

The email was framed to emphasize urgency and invoke fear. It said there was an error in the recipients’ student email account settings that required them to use an enclosed link to access their account settings and resolve the problem.

They had to do so within a short time period, they were told, otherwise they would no longer have access to the account. In a real phishing expedition, the enclosed link would take them to an outside account/phishing site that would collect the respondent’s personal information.

Vishwanath says 49 participants replied to the phishing request immediately and another 36 replied after a reminder. The respondents then completed a five-point scale that measured how they processed information while deciding what to do with the email. When a few other variables were factored in, the phishing attack had an overall success rate of 68 percent.
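
The raw counts alone reproduce the headline figure, as this back-of-the-envelope check shows:

```python
# Back-of-the-envelope check of the victimization rate from the counts above.
immediate, after_reminder, total = 49, 36, 125
rate = (immediate + after_reminder) / total
print(f"{rate:.0%}")  # 68%
```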

“With email becoming the dominant way of communicating worldwide,” Vishwanath says, “the phishing trend is expected to increase as technology becomes more advanced and phishers find new ways to appeal to their victims.

“While these criminals may not be easily stopped, understanding what makes us more susceptible to these attacks is a vital advancement in protecting internet users worldwide.”

The study was presented at the 48th Hawaii International Conference on System Sciences, held January 5-8 at the University of Hawaii.

Source: University at Buffalo

Is this kid too young for football?

9 hours 9 min ago

As the 100 million viewers tuning in to this Sunday’s Super Bowl can attest, Americans adore football. And for many, the love affair begins in childhood.

But a new study points to a possible increased risk of cognitive impairment from playing youth football.

Researchers from Boston University School of Medicine found that former National Football League (NFL) players who participated in tackle football before the age of 12 are more likely to have memory and thinking problems as adults.

The study contradicts conventional wisdom that children’s more plastic brains might recover from injury better than those of adults, and suggests that they may actually be more vulnerable to repeated head impacts, especially if injuries occur during a critical period of growth and development.

“Sports offer huge benefits to kids, as far as work ethic, leadership, and fitness, and we think kids should participate,” says study lead author Julie Stamm, a PhD candidate in anatomy and neurobiology. “But there’s increasing evidence that children respond differently to head trauma than adults.

“Kids who are hitting their heads over and over during this important time of brain development may have consequences later in life.”

“This is one study, with limitations,” adds study senior author Robert Stern, a professor of neurology, neurosurgery, and anatomy and neurobiology and director of the Alzheimer’s Disease Center’s Clinical Core. “But the findings support the idea that it may not make sense to allow children—at a time when their brain is rapidly developing—to be exposed to repetitive hits to the head.

“If larger studies confirm this one, we may need to consider safety changes in youth sports.”

In the study, researchers reexamined data from Boston University’s ongoing DETECT (Diagnosing and Evaluating Traumatic Encephalopathy Using Clinical Tests) study, which aims to develop methods of diagnosing chronic traumatic encephalopathy (CTE) during life.

CTE is a neurodegenerative disease often found in professional football players, boxers, and other athletes who have a history of repetitive brain trauma. It can currently be diagnosed only by autopsy.

42 former NFL players

For this latest study, published in the journal Neurology, scientists examined test scores of 42 former NFL players, with an average age of 52, all of whom had experienced memory and thinking problems for at least six months. Half the players had played tackle football before age 12, and half had not.

Significantly, the total number of concussions was similar between the two groups.

Researchers found that the players exposed to tackle football before age 12 had greater impairment in mental flexibility, memory, and intelligence—a 20 percent difference in some cases. These findings held up even after statistically removing the effects of the total number of years the participants played football. Both groups scored below average on many of the tests.
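
"Statistically removing" the effect of years played is a standard covariate adjustment. The sketch below shows the general technique on entirely made-up data; it is not the authors' analysis, and the variable names are hypothetical.

```python
# A minimal covariate-adjustment sketch with made-up data (NOT the study's
# data or model): compare cognitive scores between exposure groups while
# controlling for total years of football played.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 42
df = pd.DataFrame({
    "before_12": rng.integers(0, 2, n),    # 1 = played tackle football before age 12
    "years_played": rng.normal(18, 4, n),  # hypothetical career lengths
})
# Hypothetical scores: worse with early exposure, also shaped by years played.
df["score"] = 100 - 8 * df["before_12"] - 0.5 * df["years_played"] + rng.normal(0, 5, n)

# The coefficient on before_12 estimates the group difference after
# adjusting for years_played.
model = smf.ols("score ~ before_12 + years_played", data=df).fit()
print(model.params["before_12"])
```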

“We were surprised by how striking the results were,” says Stamm. “Every single test was significantly different, by a lot.”

Stamm says the researchers were especially surprised by the scores on a reading test called the WRAT-4, which has participants read words of increasing difficulty. A person’s score depends on the ability to pronounce the words correctly, indicating the person’s familiarity with complex vocabulary.

The low scores may be significant, she says, because they suggest that repeated head trauma at a young age might limit peak intelligence. She emphasizes, however, that there may be other reasons for a low score, and that more research is needed.

Why age 12?

The authors chose age 12 as the cutoff because significant peaks in brain development occur in boys around that age. (This happens for girls a bit earlier, on average.) Around age 12, says Stern, blood flow to the brain increases, and brain structures such as the hippocampus, which is critical for memory, reach their highest volume.

Boys’ brains also reach a peak in their rate of myelination—the process in which the long tendrils of brain cells are coated with a fatty sheath, allowing neurons to communicate quickly and efficiently. Because of these developmental changes, Stern says, this age may possibly represent a “window of vulnerability,” when the brain may be especially sensitive to repeated trauma.

“If you take just the hippocampus, that’s a really important part of your brain,” he says. “It may be that if you hit your head a lot during this important period, you might have significant memory problems later on.”

Stern adds that another group of researchers, who used accelerometers in helmets to track the number and severity of hits among football players aged 9 to 12, found that players received an average of 240 high-magnitude hits per season, sometimes with a force similar to that experienced by high school and college players.

Football in America

With approximately 4.8 million athletes playing youth football in the United States, the long-term consequences of brain injury represent a growing public health concern. This study comes at a time of increasing awareness of the dangers of concussions—and subconcussive hits—in youth sports like football, hockey, and soccer.

In 2012, Pop Warner football, the oldest and largest youth football organization in the country, changed its rules to limit contact during practices and banned intentional head-to-head contact. When reached by phone at the organization’s headquarters in Langhorne, Pennsylvania, a Pop Warner spokesman declined to comment on the study until they had more time to examine the results in detail.

“Football has the highest injury rate among team sports,” writes Christopher M. Filley, a fellow with the American Academy of Neurology, in an editorial accompanying the Neurology article. “Given that 70 percent of all football players in the United States are under the age of 14, and every child aged 9 to 12 can be exposed to 240 head impacts during a single football season, a better understanding of how these impacts may affect children’s brains is urgently needed.”

Filley’s editorial cautions that the study has limitations: because the researchers could not precisely determine the players’ lifetime number of head impacts, it may be the total number of hits—rather than the age of a player—that is the more critical measurement.

In addition, because the study focuses on professional athletes, the results may not apply to recreational players who participated in youth football, but did not play beyond high school.

How to make football safer for kids

Stamm says that the next stage of research is to work with colleagues at Brigham and Women’s Hospital to conduct detailed neuroimaging of the same types of players involved in the current study, looking for underlying changes in brain anatomy that might correlate to the cognitive impairment.

She adds that this paper is a small, first-of-its-kind study, and needs to be expanded and replicated before scientists can make further recommendations about children playing contact sports. But she hopes the study will shed more light on the possible consequences of repeated head trauma in children.

She notes that some youth football organizations have taken great steps in reducing the numbers of hits to the head. However, more research is needed to see if these measures are sufficient, or if additional precautions, like substituting flag football for tackle football in those under 12, may be necessary.

“Sports are important, and we want kids to participate in football,” says Stamm. “But no eight-year-old should play a sport with his friends and end up with long-term problems. We just want kids to play sports more safely.”

The National Institutes of Health funded the study.

Source: Boston University

Globalization’s first wave wasn’t all positive

9 hours 51 min ago

Some 150 years ago, the steamship made international trade possible for many countries. Only a few countries benefited from this first wave of globalization, however.

Most ended up worse off, according to a new study.

This is evidence that international trade doesn’t automatically lead to economic prosperity, says Luigi Pascali, a professor of economics in the Centre for Competitive Advantage in the Global Economy (CAGE) at the University of Warwick.

Until the mid-1800s, the distribution of goods around the world was determined by sailing vessels, which relied on global wind patterns to get from coast to coast.

But the steamship dramatically changed the way the world did business and led to a marked acceleration in the buying and selling of goods on an international scale—it was the first wave of globalization.

“This is an ideal testing ground in which to observe the effects that globalization can have on economic development—albeit only for a brief period of history,” says Pascali, whose findings are available in a working paper.

“I looked at a novel set of data from the time and used it to make trade predictions focusing on urbanization rates, population densities, and per-capita incomes.

“What I found was that the majority of nations actually lost out as a result of globalization during this short period in history—which astonishingly goes against the widely held belief that globalization generally has a positive impact on the world.

“What also became clear from the study was that it was only a small set of core nations with inclusive political institutions that benefited from international trade, whilst the negative effect was felt by countries characterized by absolute power—which was the majority at the time,” says Pascali.

“What my study shows is that inclusive political institutions are vital to ensuring globalization results in prosperity and history presents a warning to modern day policy-makers that economic development shouldn’t be taken for granted,” he concludes.

Source: University of Warwick

‘Parasitic’ genes let mammals evolve pregnancy

9 hours 54 min ago

Transposons, also called “jumping genes,” were a key part of the evolution of pregnancy among mammals, report scientists.

They found thousands of genes that evolved to be expressed in the uterus in early mammals, including many that are important for maternal-fetal communication and suppression of the immune system.

“…I guess we owe the evolution of pregnancy to what are effectively genomic parasites”

Surprisingly, these genes appear to have been recruited and repurposed from other tissue types by transposons—ancient mobile genetic elements sometimes thought of as genomic parasites.

“For the first time, we have a good understanding of how something completely novel evolves in nature, of how this new way of reproducing came to be,” says study author Vincent Lynch, assistant professor of human genetics at the University of Chicago.

“Most remarkably, we found the genetic changes that likely underlie the evolution of pregnancy are linked to domesticated transposable elements that invaded the genome in early mammals. So I guess we owe the evolution of pregnancy to what are effectively genomic parasites.”

The study appears online in Cell Reports.

From people to pigs to platypus

To study genetic changes during the evolution of pregnancy in mammals, Lynch and his colleagues used high-throughput sequencing to catalog genes expressed in the uterus of several types of living animals—placental mammals (a human, monkey, mouse, dog, cow, pig, horse, and armadillo), a marsupial (opossum), an egg-laying mammal (platypus), a bird, a reptile, and a frog.

Then they used computational and evolutionary methods to reconstruct which genes were expressed in ancestral mammals.
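
One standard way to do this is parsimony-based ancestral state reconstruction on a phylogenetic tree. The Python sketch below runs Fitch's algorithm on a toy four-species tree with hypothetical expression calls; the study's actual pipeline was more involved, so treat this purely as an illustration of the idea.

```python
# Fitch parsimony on a toy tree: each leaf is a species, labeled 1 if a gene
# is expressed in the uterus and 0 if not; internal nodes are ancestors.
# A hypothetical illustration, not the study's method or data.

def fitch(node, leaf_states):
    """Return the Fitch state set for `node` in a nested-tuple binary tree."""
    if isinstance(node, str):  # leaf: a species name
        return {leaf_states[node]}
    left, right = node
    s1, s2 = fitch(left, leaf_states), fitch(right, leaf_states)
    return s1 & s2 if s1 & s2 else s1 | s2  # intersect if possible, else union

# Toy tree: ((human, mouse), (opossum, platypus))
tree = (("human", "mouse"), ("opossum", "platypus"))
expressed = {"human": 1, "mouse": 1, "opossum": 1, "platypus": 0}

# The state set at the root is the inferred state in the common mammalian
# ancestor; {1} means the most parsimonious reconstruction has the gene
# already expressed in the uterus.
print(fitch(tree, expressed))  # {1}
```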

The researchers found that as the first mammals evolved—and resources for fetal development began to come more from the mother and less from a yolk—hundreds of genes that are important for cellular signaling, metabolism, and uterine development started to be expressed in the uterus.

As the eggshell was lost and live-birth evolved in the common ancestor to marsupials and placental mammals, more than 1,000 genes were turned on, many of which were strongly linked to the establishment of maternal-fetal communication.

As prolonged pregnancy evolved in placental mammals, hundreds of genes began to be expressed that greatly strengthened and elaborated maternal-fetal communication, as well as locally suppressing the maternal immune system in the uterus—thus protecting the developing fetus.

The team also identified hundreds of genes that were turned off as these lineages evolved, many of which had been involved in eggshell formation.

“We found lots of genes important for maintaining hormone signaling and mediating maternal-fetal communication, which are essential for pregnancy, evolved to be expressed in the uterus in early mammals,” Lynch says.

“But immune suppression genes stand out. The fetus is genetically distinct from the mother. If these immune genes weren’t expressed in the uterus, the fetus would be recognized by the mother’s immune system as foreign and attacked like any other parasite.”

Genes get new jobs

In addition to function, Lynch and his colleagues investigated the origin of these genes. They found most already had roles in other organ and tissue systems such as the brain, digestive, and circulatory systems.

But during the evolution of pregnancy, these genes were recruited to be expressed in the uterus for new purposes. They evolved regulatory elements that allowed them to be activated by progesterone, a hormone critical in reproduction.

The team found that this process was driven by ancient transposons—stretches of non-protein coding DNA that can change their position within the genome.

‘Genomic parasites’

Sometimes called “jumping genes,” transposons are generally thought to be genomic parasites that serve only to replicate themselves. Many of the ancient mammalian transposons possessed progesterone binding sites that regulate this process. By randomly inserting themselves into other places in the genome, transposons appear to have passed on this activation mechanism to nearby genes.

“Genes need some way of knowing when and where to be expressed,” Lynch says. “Transposable elements appear to have brought this information, allowing old genes to be expressed in a new location, the uterus, during pregnancy. Mammals very likely have a progesterone-responsive uterus because of these transposons.”

Lynch and his colleagues note their findings represent a novel explanation for how entirely new biological structures and functions arise. Rather than genes gradually evolving uterine expression one at a time, transposable elements coordinated large-scale, genome-wide changes that allowed numerous genes to be activated by the same signal—in this case, progesterone, which helped drive the evolution of pregnancy.

“It’s easy to imagine how evolution can modify an existing thing, but how new things like pregnancy evolve has been much harder to understand,” Lynch says. “We now have a new mechanistic explanation of this process that we’ve never had before.”

The Burroughs Wellcome Preterm Birth Initiative and the John Templeton Foundation supported the work.

Source: University of Chicago

These 2 genes trigger deadly ovarian cancer

Thu, 01/29/2015 - 12:05

By creating the first mouse model of aggressive ovarian cancer, researchers say they may have uncovered a better way to diagnose and treat it.

“It’s an extremely aggressive model of the disease, which is how this form of ovarian cancer presents in women,” says study leader Terry Magnuson, a professor and chair of genetics at the UNC School of Medicine.

Magnuson’s team discovered how two genes interact to trigger the cancer and then spur on its development.

Not all mouse models of human diseases provide accurate depictions of the human condition. Magnuson’s mouse model, though, is based on genetic mutations found in human cancer samples.

Mutations in two genes—ARID1A and PIK3CA—were not previously known to work together to cause this cancer. “When ARID1A is less active than normal and PIK3CA is overactive,” Magnuson explains, “the result is ovarian clear cell carcinoma 100 percent of the time in our model.”

Drug therapy

The research also showed that a drug called BKM120, which suppresses PI3 kinases, directly inhibited tumor growth and significantly prolonged the lives of mice. The drug is currently being tested in human clinical trials for other forms of cancer.

The work, published in the journal Nature Communications, was spearheaded by Ron Chandler, a postdoctoral fellow in Magnuson’s lab.

Chandler had been studying the ARID1A gene—which normally functions as a tumor suppressor in people—when results from cancer genome sequencing projects showed that the ARID1A gene was highly mutated in several types of tumors, including ovarian clear cell carcinoma. Chandler began researching the gene’s precise function in that disease and found that deleting it in mice did not cause tumor formation or tumor growth.

“We found that the mice needed an additional mutation in the PIK3CA gene, which acts like a catalyst of a cellular pathway important for cell growth,” Chandler says. Proper cell cycle regulation is crucial for normal cell growth. When it goes awry, cells can turn cancerous.

“Our research shows why we see mutations of both ARID1A and PIK3CA in various cancers, such as endometrial and gastric cancers,” Chandler adds. “Too little expression of ARID1A and too much expression of PIK3CA is the perfect storm; the mice always get ovarian clear cell carcinoma. This pair of genes is really important for tumorigenesis.”

Inflammation’s role

Magnuson’s team also found that ARID1A and PIK3CA mutations led to the overproduction of Interleukin-6, or IL-6, which is a cytokine—a kind of protein crucial for cell signaling that triggers inflammation.

“We don’t know if inflammation causes ovarian clear cell carcinoma, but we do know it’s important for tumor cell growth,” Chandler says.

Magnuson adds, “We think that IL-6 contributes to ovarian clear cell carcinoma and could lead to death. You really don’t want this cytokine circulating in your body.”

Magnuson says that treating tumor cells with an IL-6 antibody suppressed cell growth, which is why reducing IL-6 levels could help patients.

New screening tool?

While the ultimate goal of this research is better cancer treatments, Magnuson and Chandler say their finding could also open the door to better screening tools.

“If we can find something measurable that’s downstream of ARID1A—such as a cell surface protein or something else we could tease apart—then we could use it as a biomarker of disease,” Chandler adds. “We could create a way to screen women.

“Right now, by the time women find out they have ovarian clear cell carcinoma, it’s usually too late. If we can find it earlier, we’ll have much better luck successfully treating patients.”

The National Institutes of Health funded this research. Chandler was supported by a postdoctoral fellowship from the American Cancer Society and an Ann Schreiber Mentored Investigator Award from the Ovarian Cancer Research Fund.

Additional researchers from UNC contributed to the paper, and Duke University postdoctoral fellow Jeffrey Damrauer was also a coauthor; he was a graduate student in the Kim lab during this study.

Source: UNC Chapel Hill

Why Mars has 2 wildly different hemispheres

Thu, 01/29/2015 - 10:39

The two hemispheres of Mars are dramatically different from each other—a characteristic not seen on any other planet in our solar system.

Non-volcanic, flat lowlands characterize the northern hemisphere, while highlands punctuated by countless volcanoes extend across the southern hemisphere.

Scientists can’t agree on what caused the differences, but ETH Zurich geophysicist Giovanni Leone is offering a new explanation.

Leone and colleagues have concluded that a large celestial object must have smashed into the Martian south pole in the early history of the solar system. Their computer simulation shows that this impact generated so much energy that it created a magma ocean, which would have extended across what is today’s southern hemisphere.

Crust like a crème brûlée

The celestial body that struck Mars must have been at least one-tenth the mass of Mars to unleash enough energy to create this magma ocean. The molten rock eventually solidified into the mountainous highlands that today comprise the southern hemisphere of Mars.

In their simulation, the researchers assumed:

  • The celestial body consisted largely of iron
  • It had a radius of at least 1,600 kilometers (994 miles)
  • It crashed into Mars at a speed of five kilometers/second (three miles/second).

They estimated the event occurred around 4 to 15 million years after Mars formed. The planet’s crust must have been very thin at that time, like the hard, caramelized surface of a crème brûlée. And, just like the popular dessert, hiding beneath the surface was a liquid interior.
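
A back-of-the-envelope calculation shows what those assumptions imply for mass and energy. The Python sketch below treats the impactor as a pure-iron sphere (the researchers assumed a body that was largely, not entirely, iron, so this slightly overstates the mass) with a standard iron density; the radius and speed are the study's figures.

```python
# Rough mass and kinetic-energy check for the proposed impactor.
# Density and the pure-iron simplification are assumptions, not study values.
import math

r = 1.6e6          # impactor radius in meters (from the study)
v = 5.0e3          # impact speed in m/s (from the study)
rho_iron = 7900.0  # assumed density of iron, kg/m^3
m_mars = 6.42e23   # mass of Mars, kg

mass = rho_iron * (4 / 3) * math.pi * r**3
energy = 0.5 * mass * v**2

print(f"impactor mass ≈ {mass:.2e} kg ({mass / m_mars:.0%} of Mars's mass)")
print(f"kinetic energy ≈ {energy:.1e} J")
```

With these numbers the body comes out at roughly a fifth of Mars's mass, comfortably above the one-tenth threshold the researchers cite.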

When the celestial object hit, it added more mass to Mars, particularly iron. But the simulation also found that it triggered strong volcanic activity. Around the equator in particular, numerous mantle plumes were generated as a consequence of the impact, which migrated to the south pole where they ended. Mantle plumes are magma columns that transport liquid material from the mantle to the surface.

Are other theories wrong?

In the model, the researchers found that activity on Mars died down around 3.5 billion years ago, after which time the Red Planet experienced neither volcanic activity nor a magnetic field—this is consistent with observations and measurements.

Earlier theories posited the opposite, namely that there must have been a gigantic impact or many smaller strikes against the northern hemisphere. The most important theory about the origin of the Mars dichotomy was formulated by two American researchers in 1984 in an article in the journal Nature.

They postulated that a large celestial object struck the Martian north pole. In 2008 a different team revived this idea and published it once again in Nature.

This theory did not convince Leone: “Our scenarios more closely reflect a range of observations about Mars than the theory of a northern hemisphere impact,” states Leone.

The volcanoes on Mars are very unevenly distributed: they are common and widespread on the southern hemisphere, but are rare and limited to only a few small regions in the northern hemisphere.

“Our model is an almost identical depiction of the actual distribution of volcanic activity,” asserts Leone. According to the researcher, no other model has been able to portray or explain this distribution before.

“Our simulation was also able to reproduce the different topographies of the two hemispheres in a nearly realistic manner.”

And he goes on to explain that the model—depending on the composition of the impact body chosen—is a virtually perfect representation of the size and shape of the hemispheres.

One condition, however, is that the celestial body impacting Mars consist of 80 percent iron; when the researchers simulated the impact with a celestial body made of pure silicate rock, the resulting image did not correspond to the reality of the dichotomy.

Too hostile for water or life?

Lastly, the model developed by the ETH researchers confirmed the date on which the magnetic field on Mars ceased to exist. The date calculated by the model corresponds to around 4.1 billion years ago, a figure previously proven by other scientists.

The model also demonstrates why it ceased: a sharp decrease in heat flow from the core into the mantle and the crust in the first 400 million years after the impact. After a billion years, the heat flow was only one-tenth its initial value, too low to sustain even volcanic activity.

The model’s calculations closely match previous calculations and mineralogical explorations.

The volcanic activity is related to the heat flow, explains Leone, though the degree of volcanic activity could be varied in the simulation and influenced by the strength of the impact.

This, he states, is in turn linked to the size and composition of the celestial object. In other words, the larger it is, the stronger the volcanic activity is.

Nevertheless, after one billion years the volcanic vents were extinguished—regardless of the size of the impact.

It has become increasingly clear to Leone that Mars has always been an extremely hostile planet, and he considers it almost impossible that it ever had oceans or even rivers of water.

“Before becoming the cold and dry desert of today, this planet was characterized by intense heat and volcanic activity, which would have evaporated any possible water and made the emergence of life highly unlikely.”

The study appears in the journal Geophysical Research Letters.

Source: ETH Zurich

You can join the 1 percent, but you can’t stay

Thu, 01/29/2015 - 10:37

A typical American has a one in nine shot of hitting the jackpot and joining the wealthiest 1 percent for at least one year of his or her working life, say researchers.

There’s bad news, too, however: only an elite few get to stay in that economic stratosphere—and nonwhite workers remain among those who face far longer odds.

“Rather than static groups that experience continual high levels of economic attainment, there would appear to be more movement into and out of these income levels,” write Mark Rank, a professor of social welfare at the Brown School at Washington University in St. Louis, and Tom Hirschl, professor of development sociology at Cornell University.

“Education, marriage, and race are among the strongest predictors of top-level income, and in particular the race effect suggests persistent patterns of social inequality,” they write.

The research builds upon findings presented in their book Chasing the American Dream: Understanding What Shapes Our Fortunes (Oxford University Press, 2014), which analyzes social mobility at the lower end of America’s economic spectrum. The latest research from Rank and Hirschl uses a new “life course” methodology to examine social mobility at the top levels of income distribution.

Climbing the ranks

Relying on data collected regularly since 1968 as part of the University of Michigan’s Panel Study of Income Dynamics, this life-course approach analyzed thousands of people from ages 25-60, and examined long stretches of their work lives to track economic movement.

This large-scale, long-term observation, reported in PLOS ONE, provides some surprising results:

  • By age 60, almost 70 percent of the working population will experience at least one year in the top 20 percent of income earners.
  • More than half (53 percent) will have at least one year among the top 10 percent.
  • Slightly more than 11 percent will spend at least one year as members of the top 1 percent.

While Rank and Hirschl found substantial fluidity among the ranks of America’s wealthiest, they also noticed that very few get to stay among the ranks of the super rich for very long.

While 70 percent of the working population may hit the top 20 percent of earners, barely 20 percent will stay for 10 consecutive years or more. At the very top, while one in nine Americans may at some time in their careers be among the top 1 percent, fewer than one in 160 (0.6 percent) will stay for a decade or more.
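
The bookkeeping behind these figures is easy to express in code. The sketch below runs it on simulated incomes (not the PSID data, so its output will not match the paper's percentages); the point is how "ever reached the top 1 percent" and "stayed 10 consecutive years" are computed from a person-by-year panel.

```python
# Life-course bookkeeping on a simulated income panel (illustration only).
import numpy as np

rng = np.random.default_rng(1)
n_people, n_years = 10_000, 36  # roughly ages 25-60
# Hypothetical incomes: a persistent personal component times yearly noise.
person = rng.lognormal(0, 0.8, (n_people, 1))
income = person * rng.lognormal(0, 0.6, (n_people, n_years))

top1 = income >= np.quantile(income, 0.99, axis=0)  # top 1% within each year

def longest_run(row):
    """Longest streak of consecutive True values in a boolean row."""
    best = cur = 0
    for hit in row:
        cur = cur + 1 if hit else 0
        best = max(best, cur)
    return best

ever = top1.any(axis=1).mean()
decade = np.mean([longest_run(r) >= 10 for r in top1])
print(f"ever in top 1%: {ever:.1%}; 10+ consecutive years: {decade:.2%}")
```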

“Attaining 10 consecutive years at the top is rare, and reflects the idea that only a few persist at this elite level,” the authors write.

Education, marriage, race

The researchers note that the generally high level of turnover among the top ranks of earners can work to buffer economic inequality.

They also found this higher-than-expected fluidity to be a double-edged sword—while it demonstrates relatively widespread opportunity for top-level income, it also creates a very real insecurity among those who reach those heights.

Lastly, Rank and Hirschl uncovered another “contentious social implication” in their research: When looking at demographic patterns among the people whose data was analyzed, being educated, being married, and being white were among the strongest predictors of reaching the economic peak.

“It would be misguided to presume that top-level income attainment is solely a function of hard work, diligence, and equality of opportunity,” they write.

“A more nuanced interpretation includes the proposition that access to top-level income is influenced by historic patterns of race and class inequality.”

Source: Washington University in St. Louis

‘Safe’ pesticide could be an ADHD culprit

Thu, 01/29/2015 - 08:25

New research suggests that a commonly used pesticide found on lawns, golf courses, and vegetable crops may raise the risk of attention deficit hyperactivity disorder (ADHD).

The pesticide may alter the development of the brain’s dopamine system—which is responsible for emotional expression and cognitive function.

Mice exposed to the pyrethroid pesticide deltamethrin in utero and through lactation exhibited several features of ADHD, including dysfunctional dopamine signaling in the brain, hyperactivity, attention deficits, and impulsive-like behavior.

Attention deficit hyperactivity disorder most often affects children, with an estimated 11 percent of children between the ages of 4 and 17—about 6.4 million—diagnosed as of 2011. Boys are three to four times more likely to be diagnosed than girls.

While early symptoms, including an inability to sit still, pay attention, and follow directions, begin between the ages of 3 and 6, diagnosis is usually made after the child starts attending school full-time.

More trouble for males

Importantly, in this study, the male mice were affected more than the female mice, similar to what is observed in children with ADHD.

The ADHD-like behaviors persisted in the mice through adulthood, even though the pesticide—considered to be less toxic and used on golf courses, in the home, and on gardens, lawns, and vegetable crops—was no longer detectable in their systems.

Although there is strong scientific evidence that genetics plays a role in susceptibility to the disorder, no specific gene has been found that causes ADHD and scientists believe that environmental factors may also contribute to the development of the behavioral condition.

Using data from the Centers for Disease Control and Prevention’s National Health and Nutrition Examination Survey (NHANES), the study analyzed health care questionnaires and urine samples of 2,123 children and adolescents.

Researchers asked parents whether a physician had ever diagnosed their child with ADHD and cross-referenced each child’s prescription drug history to determine if any of the most common ADHD medications had been prescribed.

Children with higher pyrethroid pesticide metabolite levels in their urine were more than twice as likely to be diagnosed with ADHD.
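
"More than twice as likely" is the kind of figure that falls out of a two-by-two comparison of exposure and diagnosis. The counts below are entirely hypothetical, chosen only to show how such an odds ratio is computed; the study itself used adjusted models on the NHANES sample.

```python
# Odds ratio from a hypothetical 2x2 table (made-up counts, NOT NHANES data).
import numpy as np

# rows: higher vs. lower metabolite level; columns: ADHD diagnosis yes / no
table = np.array([[60, 440],   # hypothetical: 60 of 500 high-level children diagnosed
                  [30, 470]])  # hypothetical: 30 of 500 low-level children diagnosed

odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"odds ratio ≈ {odds_ratio:.1f}")  # ≈ 2.1 with these made-up counts
```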

Deltamethrin exposure

These findings provide strong evidence, using data from both animal models and humans, that exposure to pyrethroid pesticides, including deltamethrin, may be a risk factor for ADHD, says lead author Jason Richardson, associate professor in the department of environmental and occupational medicine at Rutgers Robert Wood Johnson Medical School and a member of the Environmental and Occupational Health Sciences Institute (EOHSI).

“Although we can’t change genetic susceptibility to ADHD, there may be modifiable environmental factors, including exposures to pesticides, that we should be examining in more detail,” says Richardson.

Young children and pregnant women may be more susceptible to pesticide exposure because their bodies do not metabolize the chemicals as quickly. This is why, Richardson says, human studies need to be conducted to determine how exposure affects the developing fetus and young children.

“We need to make sure these pesticides are being used correctly and not unduly expose those who may be at a higher risk,” Richardson says.

The researchers, including colleagues from Emory University, University of Rochester Medical Center, and Wake Forest University, report the findings in the Journal of the Federation of American Societies for Experimental Biology.

Source: Rutgers

Donor tissue for joint repair stays fresh for 60 days

Thu, 01/29/2015 - 08:09

Currently doctors have to throw away more than 80 percent of donated tissue used for joint replacements because the tissue does not survive long enough to be transplanted.

A new way to preserve the tissue means it can last much longer: up to 60 days instead of less than 30.

“It’s a game-changer,” says James Stannard, a professor of orthopedic surgery at the University of Missouri School of Medicine. “The benefit to patients is that more graft material will be available and it will be of better quality. This will allow us as surgeons to provide a more natural joint repair option for our patients.”

The technology, called the Missouri Osteochondral Allograft Preservation System (MOPS), more than doubles the storage life of bone and cartilage grafts from organ donors compared to the current preservation method used by tissue banks.

In traditional preservation methods, donated tissues are stored within a medical-grade refrigeration unit in sealed bags filled with a standard preservation solution. MOPS uses a newly developed preservation solution and special containers designed by the research team that allow tissues to be stored at room temperature.

Using MOPS, the storage time for donor tissue could be extended to at least 60 days, versus the current storage time of approximately 28 days.

“Time is a serious factor when it comes to utilizing donated tissue for joint repairs,” says study coauthor James Cook, director of the Comparative Orthopedic Laboratory and the Missouri Orthopedic Institute’s Division of Research. “With the traditional preservation approach, we only have about 28 days after obtaining the grafts from organ donors before the tissues are no longer useful for implantation into patients.

“Most of this 28-day window of time is used for testing the tissues to ensure they are safe for use. This decreases the opportunity to identify an appropriate recipient, schedule surgery and get the graft to the surgeon for implantation.”

Metal implants vs. tissue grafts

Stannard says that patients with metal and plastic implants often are forced to give up many of the activities they previously enjoyed in order to extend the life of their new mechanical joints.

“For patients with joint problems caused by degenerative conditions, metal and plastic implants are still a very good option,” Stannard adds. “When the end of a bone that forms a joint is destroyed over time, the damage is often too extensive to use tissue grafts.

“However, for patients who experience trauma to a joint that was otherwise healthy before the injury, previous activity levels needn’t be drastically altered if we can replace the damaged area with living tissue.”

Donor tissue grafts have been used for many years as a way to fill in damaged areas of a joint, as an alternative to removing bone and implanting metal and plastic components. The body accepts bone and cartilage grafts without the need for anti-rejection drugs, and the donor tissue becomes part of the joint.

However, the method of preserving the grafts themselves has limited the amounts of quality donor tissue available to surgeons.

100% usable at 60 days

Additionally, because of testing requirements and logistics, only about 20 percent of grafts are ultimately usable because not enough live tissue cells remain in them after 28 days.

In contrast, the study found that the MOPS preservation system resulted in a 100 percent rate of usable tissue grafts at 60 days after procurement.

“With our new preservation technique, we can offer more patients a repair that allows their joints to respond to daily activities like they did when the joints were healthy,” Cook says. “Like a normal joint, the implanted tissue can renew itself, resulting in decreased physical limitations to the patient.”

The study was recently published in Clinical Orthopaedics and Related Research.

Source: University of Missouri

Diesel generators cut heat but spew emissions

Thu, 01/29/2015 - 07:44

A way to ease peak demand on the energy grid could help explain exceedingly high ozone concentrations in the Northeast region of the US.

A new study finds that using diesel backup generators in non-emergency situations triggers rising atmospheric ozone concentrations due to additional nitrogen oxide emissions.

During hazy, hot summer days, power systems in the Northeast experience close-to-capacity demand, putting pressure on the electricity grid. Peak electricity demand also leads to high emissions, especially nitrogen oxides, which are precursors to tropospheric ozone pollution.

“There is an ongoing debate over whether diesel emergency backup generators should be allowed to operate during peak electricity demand periods—non-emergency conditions—without proper emission regulations, and whether doing so will deteriorate air quality,” says lead author Max Zhang, associate professor of mechanical and aerospace engineering at Cornell University.

As the climate changes, peak demand for electric power becomes more frequent. “We typically see high temperature, high electricity demand, high electricity prices, and high pollution levels during those periods,” Zhang says.

During peak demand, power system operators and utility companies call on consumers to reduce their usage to help relieve the burden on the grid—arrangements known as demand response programs.

Behind-the-meter generators

Throughout the Northeast, industrial and commercial entities with diesel backup generators can fire them up under those non-emergency conditions.

Analyzing data from demand response programs run by power system operators for commercial entities with generators, Zhang and graduate student Xiyue Zhang (no relation) found that emissions from diesel backup generators (called “behind-the-meter” generators in the power industry) very likely contribute to exceedingly high ozone concentrations in the Northeast region and account for a substantial share of total nitrogen oxide emissions from electricity generation.

The emission rates from existing diesel backup generators are similar to or even exceed those from the highest emitting natural gas-fired generators.
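
To get a feel for the scale at stake, the sketch below tallies the NOx added when behind-the-meter diesel units cover a demand-response event. Every number in it is assumed for illustration; none comes from the study.

```python
# Illustrative NOx tally for a demand-response event (all values assumed).
n_generators = 500      # assumed participating diesel units
power_kw = 500          # assumed average output per unit, kW
hours = 4               # assumed event duration
nox_g_per_kwh = 10.0    # assumed uncontrolled diesel NOx emission rate, g/kWh

total_kwh = n_generators * power_kw * hours
total_nox_tonnes = total_kwh * nox_g_per_kwh / 1e6
print(f"{total_kwh:,} kWh -> ≈ {total_nox_tonnes:.0f} tonnes of NOx")
```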

The effects of behind-the-meter emissions on regional ozone pollution and near-source particulate matter (PM) pollution can be unintended consequences of demand response programs, Zhang says. “There is a need to quantify the environmental impacts of demand-response programs in designing sound policies related to demand-side resources,” he says.

One solution the authors propose is “green” demand response, which means curtailing electricity demand outright or relying on properly sited diesel generators equipped with state-of-the-art emission controls.

By concurrently maintaining resource adequacy for power systems and reducing emissions, “green” demand response is key to achieving the grid’s reliability and protecting public health, the engineers say.

The study appears in the journal Environmental Science and Technology.

Support for the work came from the Consortium for Electric Reliability Technology Solutions and the New York State Energy Research and Development Authority.

Source: Cornell University

Lead paint may still lurk on the porch

Thu, 01/29/2015 - 06:37

Housing regulations have been key to lowering rates of lead poisoning, but new research finds that porches may remain a danger to children’s health.

“This study shows that porches are an important potential source of lead exposure for children,” says study coauthor Katrina Korfmacher, director of the Community Outreach and Engagement Core of the University of Rochester Medical Center.

“It is becoming clear that porch dust lead can be effectively reduced through repairs, cleaning, and maintenance.”

Lead is a neurotoxin and has significant health, learning, and behavioral effects, even at levels previously thought to be safe. While federal, state, and municipal laws have contributed to a significant decline in the overall levels of childhood lead poisoning, rates remain high in some communities, particularly low income urban areas with older rental housing.

An estimated 19 percent of homes in the US contain lead paint hazards; this number rises to 35 percent in the homes of individuals below the poverty line. In Rochester, New York, where the study was conducted, more than 86 percent of the housing stock was constructed prior to the ban on lead paint in 1978.

Lead laws

Some local communities, including Rochester, have adopted ordinances that require owners and landlords to take steps to ensure that the interiors of rental properties are “lead safe.” However, in many instances these requirements stop at the front door and do not cover exterior spaces and structures such as porches.

No communities have standards limiting the amount of lead in dust on porches, because there is no federal standard and there has been limited evidence that mitigating lead hazards in these instances is feasible.

Porches hold the potential to be a source of lead hazards for young children, either from lead dust being tracked or blown into the house or through direct exposure. This is especially true in urban neighborhoods where porches often serve as the “front yard” where children play.

Porch floors

The researchers sampled lead paint levels on porches at 79 homes in Rochester that had recently undergone lead abatement. Before work began, the researchers found that porch floor dust lead levels were nearly four times greater than dust lead levels on interior floors. When dust lead levels were higher on the porches, lead dust levels were also higher on the interiors of homes.

After the porches were replaced or repainted, the porch dust lead levels significantly declined, indicating that property owners can effectively address the hazard.

The study also finds that when interiors were treated for lead paint but no work was done on the exterior, the porch dust lead levels rose immediately after work, most likely from workers tracking dust and debris onto the porches.

These findings appear to indicate that steps taken to make the interior of homes more “lead safe” may inadvertently be causing the porches to become more hazardous.

“Without a porch standard, no one was held accountable for cleaning porches after interior renovations,” says lead author Jonathan Wilson, acting director of the National Center for Healthy Housing (NCHH).

“Lead on porches should be addressed, and standards for porch lead dust must be adopted to protect children from inadequate cleanup.”

The new study was a partnership among NCHH, the University of Rochester Medical Center, the City of Rochester, and Action for a Better Community, a Rochester-based non-profit organization.

The US Department of Housing and Urban Development provided funding for the work, which appears in the journal Environmental Health.

Source: University of Rochester

Are hot flashes bad news for women’s hips?

Wed, 01/28/2015 - 13:39

Women who go through moderate to severe hot flashes and night sweats during menopause tend to have higher rates of hip fracture, according to a new study.

Women with these symptoms also tend to have lower bone mineral density than peers without menopausal symptoms.

The study followed thousands of women for eight years. After adjusting for age, body mass index, and demographic factors, it found that women who reported moderate to severe hot flashes at baseline enrollment showed a significant reduction in bone density in the femoral neck region of their hips over time, and were nearly twice as likely to have a hip fracture during the follow-up period.

Jean Wactawski-Wende of the University at Buffalo School of Public Health and Health Professions says the research team examined data from 23,573 clinical trial participants, aged 50 to 79, who were not using menopausal hormone therapy at enrollment and were not assigned to use it during the trial. The researchers conducted baseline and follow-up bone density examinations in 4,867 of these women.

Double the risk of hip fracture

“We knew that during menopause, about 60 percent of women experience vasomotor symptoms (VMS), such as hot flashes and night sweats. They are among the most bothersome symptoms of menopause and can last for many years,” says Wactawski-Wende.

“It also was known that osteoporosis, a condition in which bones become structurally weak and more likely to break, afflicts 30 percent of all postmenopausal women in the United States and Europe, and that at least 40 percent of that group will sustain one or more fragility fractures in their remaining lifetime,” she says.

“What we did not know,” says Wactawski-Wende, “was whether VMS are associated with reductions in bone mineral density or increased fracture incidence.

“Women who experience vasomotor menopausal symptoms will lose bone density at a faster rate and nearly double their risk of hip fracture,” she says, “and the serious public health risk this poses is underscored by previous research that found an initial fracture poses an 86 percent risk for a second new fracture.

“Clearly more research is needed to understand the relationship between menopausal symptoms and bone health. In the meantime, women at risk of fracture may want to engage in behaviors that protect their bones including increasing their physical activity and ensuring they have adequate intakes of calcium and vitamin D.”

The prospective observational study appears online in the Journal of Clinical Endocrinology & Metabolism.

This study used data and participants from the Women’s Health Initiative (WHI), launched by the US National Institutes of Health (NIH) in 1991 to address major health issues causing morbidity and mortality in postmenopausal women.

The WHI consisted of three clinical trials and an observational study undertaken at 40 clinical centers throughout the US, including the University at Buffalo Clinical Center directed by Wactawski-Wende.

Wactawski-Wende is a professor in the department of epidemiology and environmental health, as well as the department of obstetrics and gynecology in the UB School of Medicine and Biomedical Sciences.

Additional coauthors contributed from UCLA; the Fred Hutchinson Cancer Research Center; the University of Pittsburgh; Brigham and Women’s Hospital and Harvard Medical School; Kaiser Permanente; the University of Miami’s Miller School of Medicine; Wake Forest School of Medicine; and Mercy Health Osteoporosis and Bone Health Services.

Source: University at Buffalo

Big storms can make or break politicians

Wed, 01/28/2015 - 13:10

Why were preparations for “Winter Storm Juno” so intense? Politics, says Andrew Reeves, a political scientist who studies the politics of natural disasters.

“The current snow storm is providing an unexpected challenge to mayors, governors, and other state and local officials throughout the mid-Atlantic and New England,” he says.

“Not only does a major snow storm launch an unexpected stress test on already strained budgets, it lets us observe leaders reacting to unexpected crises without much lead time.”

Describing the big snowstorm as a “pop quiz” in leadership for politicians, Reeves notes that then-Massachusetts Gov. Michael Dukakis received high marks for his handling of the Blizzard of 1978, whereas the late Washington, DC, Mayor Marion Barry was ridiculed for partying at the 1987 Super Bowl while his constituents were digging out from a massive, two-foot snowfall.

“A fair test or not, these unexpected severe weather calamities have provided the low points for some—and launching pads for others—in their political careers,” says Reeves, assistant professor of political science at Washington University in St. Louis and a research fellow at the Weidenbaum Center on the Economy, Government, and Public Policy.

“Given the past voter backlash against responses that are (or at least appear) inept, it’s of little surprise that (New York City) Mayor (Bill) De Blasio took an aggressive response even though the storm didn’t live up to the initial forecasts for New York,” he says.

Disaster relief

Reeves’ research looks at how voters hold presidents and state governors accountable for the decisions they make while in office, including decisions to provide emergency funding that helps states and counties recover economically from natural disasters, such as the megastorm that just lashed the nation’s Eastern seaboard.

His forthcoming book, The Particularistic President: Executive Branch Politics and Political Inequality (Cambridge University Press, 2015), examines how local accountability, combined with the institutions of presidential elections, causes presidents to disproportionately reward important constituencies with federal dollars, including the declaration of disaster relief.

In the book, Reeves and coauthor Douglas Kriner of Boston University argue that presidents, like members of Congress, are particularistic—they routinely pursue policies that allocate federal resources in a way that disproportionately benefits their more narrow partisan and electoral constituencies.

“Just by virtue of being a hotly contested electoral battleground, a state can expect to receive twice as many disaster declarations as it would if it wasn’t in play during the presidential election,” Reeves says.

Predicting disaster declarations

In a current working paper, Reeves examines what motivates governors as they interact with the president in requesting aid for a natural disaster.

His analysis of monthly disaster declaration requests from 1972 to 2006 finds that governors from swing states request disaster aid above and beyond the amounts suggested by actual need.

But this is only true, he finds, for governors who are not term-limited and can run again. He finds no evidence of partisan effects—governors from battleground states request help without hesitation from other-party presidents even at election time.

The study does find that election-seeking governors contribute to the politicization of disaster aid.

“The best predictor of a presidential disaster declaration, bar none, is actual need,” Reeves says. “The question (of politics) arises in these marginal cases, when it’s unclear whether to give or not.”

Federal disaster relief is a substantial part of the federal budget, with one study estimating that Congress spent at least $136 billion from 2011-13, or about $400 per household per year.
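Those per-household numbers are easy to sanity-check. Here is a minimal back-of-the-envelope sketch; the 115 million household count is an assumed, Census-scale figure for that period, not a number from the study:

```python
# Back-of-the-envelope check of the "$400 per household per year" figure.
# The household count is an assumed estimate; the study's exact
# denominator is not given in the article.
total_spending = 136e9   # dollars, congressional disaster spending 2011-2013
years = 3
households = 115e6       # assumed number of US households

per_household_per_year = total_spending / years / households
print(f"${per_household_per_year:,.0f} per household per year")  # ~$394
```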

“What is perhaps more important,” Reeves says, “is that it can quickly become the most important thing in a voter’s life.”

Source: Washington University in St. Louis

Are tiny crystals the next big thing in solar cells?

Wed, 01/28/2015 - 12:49

Tomorrow’s solar cells will likely be made of nanocrystals.

Compared with silicon in today’s solar cells, these tiny crystals can absorb a larger fraction of the solar spectrum. But, until now, the physics of electron transport in this complex material was not understood, making it impossible to systematically engineer better nanocrystal composites.

“These solar cells contain layers of many individual nano-sized crystals, bound together by a molecular glue. Within this nanocrystal composite, the electrons do not flow as well as needed for commercial applications,” explains Vanessa Wood, a professor of materials and device engineering at ETH Zurich.

Wood and her colleagues conducted an extensive study of nanocrystal solar cells, which they fabricated and characterized in their laboratories. For the first time, they were able to describe electron transport in these types of cells with a generally applicable physical model.

“Our model is able to explain the impact of changing nanocrystal size, nanocrystal material, or binder molecules on electron transport,” says Wood.

Optimized for solar cells

The model will give scientists in the research field a better understanding of the physical processes inside nanocrystal solar cells and enable them to improve solar cell efficiency.

One reason scientists are excited about nanocrystals is that their physical properties vary at different sizes. And because scientists can easily control nanocrystal size in the fabrication process, they are able to optimize them for solar cells.

One such property that can be tuned by changing nanocrystal size is the portion of the sun’s spectrum that the nanocrystals can absorb and the solar cell can convert to electricity.

Semiconductors do not absorb the entire sunlight spectrum, but rather only radiation below a certain wavelength.

In most semiconductors, this threshold can only be changed by changing the material. However, for nanocrystal composites, the threshold can be changed simply by changing the size of the individual crystals. That means scientists can select the size of nanocrystals in such a way that they absorb the maximum amount of light from a broad range of the sunlight spectrum.
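To see why size sets the threshold, recall the textbook relation between a semiconductor’s band gap and the longest wavelength it can absorb: lambda = hc / E_gap, roughly 1240 nm·eV divided by the gap. The sketch below uses that standard relation; the 0.41 eV value is the commonly cited bulk gap of lead sulfide, and the larger gaps are illustrative stand-ins for smaller, more strongly confined crystals:

```python
# Longest absorbable wavelength from a band gap, via lambda = h*c / E_gap.
# Shrinking a nanocrystal widens its effective gap (quantum confinement),
# which pulls this absorption edge toward shorter wavelengths.
H_C_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def absorption_edge_nm(band_gap_ev: float) -> float:
    """Longest wavelength (nm) a semiconductor with this gap can absorb."""
    return H_C_EV_NM / band_gap_ev

# 0.41 eV: bulk lead sulfide; larger values mimic smaller nanocrystals.
for gap_ev in (0.41, 0.7, 1.0, 1.3):
    print(f"E_gap = {gap_ev:.2f} eV -> edge ~ {absorption_edge_nm(gap_ev):,.0f} nm")
```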

An additional advantage is that nanocrystal semiconductors absorb much more sunlight than traditional semiconductors. For example, the absorption coefficient of lead sulfide nanocrystals, used by the ETH researchers in their experimental work, is several orders of magnitude greater than that of silicon, the material traditionally used in solar cells.

A relatively small amount of material is sufficient for the production of nanocrystal solar cells, making it possible to make very thin, flexible solar cells.

The work is described in Nature Communications.

Source: ETH Zurich

These rings are 200 times bigger than Saturn’s

Wed, 01/28/2015 - 12:18

Scientists have discovered what appears to be a young giant exoplanet with an enormous ring system—much larger and heavier than the system around Saturn.

The rings around J1407b are so large that if they were put around Saturn, we could see the rings at dusk with our own eyes. (Credit: M. Kenworthy/Leiden)

The 30 or more rings around J1407b are each tens of millions of kilometers in diameter. If these rings were around Saturn, they would be visible at night from Earth.

“You could think of it as kind of a super Saturn,” says Eric Mamajek, professor of physics and astronomy at the University of Rochester.

He led the team that in 2012 reported the discovery of the young star J1407 and its unusual eclipses, which they proposed were caused by a moon-forming disk around a young giant planet or brown dwarf.

A new analysis of the data, led by Matthew Kenworthy of the Leiden Observatory in the Netherlands, revealed the enormous size of the ring system.

“The details that we see in the light curve are incredible. The eclipse lasted for several weeks, but you see rapid changes on time scales of tens of minutes as a result of fine structures in the rings,” says Kenworthy.

“The star is much too far away to observe the rings directly, but we could make a detailed model based on the rapid brightness variations in the star light passing through the ring system.”
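The published model is far more detailed, but the basic idea is simple enough to sketch: treat the rings as concentric annuli, each with its own transmission, and read off the stellar flux as the line of sight sweeps across them. Every number below (radii, speed, transmissions) is invented for illustration:

```python
import numpy as np

# Toy light-curve model: a point-like star moves behind concentric ring
# annuli, each with its own transmission. All values are illustrative.
rings = [
    (0.0, 1.0, 0.3),   # (inner radius, outer radius, transmission)
    (1.0, 1.5, 1.0),   # a cleared gap, perhaps carved by a satellite
    (1.5, 3.0, 0.1),   # a nearly opaque annulus
    (3.0, 4.0, 0.6),
]

def transmission(r: float) -> float:
    for r_in, r_out, t in rings:
        if r_in <= r < r_out:
            return t
    return 1.0  # outside the ring system: star unobscured

b, v = 0.5, 0.05                      # impact parameter, transverse speed
times = np.linspace(-100.0, 100.0, 2001)
radii = np.hypot(b, v * times)        # projected distance from ring center
flux = np.array([transmission(r) for r in radii])
# 'flux' dips deeply where opaque annuli cross the line of sight and
# recovers briefly in the gap: rapid structure like that seen in the data.
print(flux.min())  # deepest point of the model eclipse
```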

Rings filled with dust

The astronomers analyzed data from the SuperWASP project—a survey that is designed to detect gas giants that move in front of their parent star.

In a recent study also led by Kenworthy, astronomers used adaptive optics and Doppler spectroscopy to estimate the mass of the ringed object.

The light curve tells astronomers that the diameter of the ring system is nearly 120 million kilometers, more than two hundred times as large as the rings of Saturn. The ring system likely contains roughly an Earth’s worth of mass in light-obscuring dust particles.

Mamajek puts into context how much material is contained in these disks and rings.

“If you were to grind up the four large Galilean moons of Jupiter into dust and ice and spread out the material over their orbits in a ring around Jupiter, the ring would be so opaque to light that a distant observer that saw the ring pass in front of the sun would see a very deep, multi-day eclipse,” Mamajek says.

“In the case of J1407, we see the rings blocking as much as 95 percent of the light of this young Sun-like star for days, so there is a lot of material there that could then form satellites.”

What’s in the gap?

In the data the astronomers found at least one clean gap in the ring structure, which is more clearly defined in the new model.

“One obvious explanation is that a satellite formed and carved out this gap,” says Kenworthy. “The mass of the satellite could be between that of Earth and Mars. The satellite would have an orbital period of approximately two years around J1407b.”

Astronomers expect that the rings will become thinner in the next several million years and eventually disappear as satellites form from the material in the disks.

“The planetary science community has theorized for decades that planets like Jupiter and Saturn would have had, at an early stage, disks around them that then led to the formation of satellites,” Mamajek explains. “However, until we discovered this object in 2012, no one had seen such a ring system.”

Astronomers estimate that the planet has an orbital period roughly a decade in length. The mass of J1407b has been difficult to constrain, but it is most likely in the range of about 10 to 40 Jupiter masses.

Source: University of Rochester

Why we need satire when times are tough

Wed, 01/28/2015 - 09:43

Satire isn’t just entertainment, according to the authors of a new book. It’s a vital function of democratic society and a way to broach taboo subjects, especially in times of crisis.

“Robust satire is often a sign of crisis and the ability to share and consume it is a sign of a free society,” says Sophia McClennen, professor of international affairs and comparative literature and director of Penn State’s Center for Global Studies.

“We see satire emerge when political discourse is in crisis and when it becomes important to use satirical comedy to put political pressure on misinformation, folly, and the abuse of power.”

In Is Satire Saving Our Nation? (Palgrave, 2014), McClennen and Remy Maisel, a recent Penn State undergraduate in media studies, trace the use of satire as an American form of political engagement from the country’s colonial era to today’s high-tech, multimedia satire.

From Twain to Colbert

Satirical cartoons—especially ones that painted King George as a buffoon—flourished in America before and during the Revolutionary War. Benjamin Franklin and Mark Twain, two of America’s favorite writers, were also both excellent satirists.

“The Founders didn’t just enjoy humor—they believed it was politically important,” the authors write. “And so they employed the pen and the sword, using satirical works as ‘weapons in a literary and ideological war to decide the future of the new Republic.'”

Shortly after the 9/11 terrorist attacks, television satirists, such as Jon Stewart, who hosts the Daily Show, and Stephen Colbert, who hosted the Colbert Report, were among the few critical voices speaking out about the US government and its policies, according to the researchers.

While satire has always been part of the nation’s political landscape, technology is changing who creates satire and how it is accessed, according to the researchers.

TV and Twitter

Unlike Franklin and Twain, current satirists are not necessarily professional writers or journalists. Satirists increasingly belong to a generation of Americans born from the early 1980s to early 2000s—often referred to as millennials—and are using social media like Twitter to spread satire.

In another major change, millennials are relying on Stewart, Colbert, and other television satirists, not just for a source of amusement, but as sources of news and information.

There are also differences in the ways comedians approach satire on television, says Maisel.

“Stewart generally breaks down the news from the mainstream media, but Colbert not only breaks the news down, he also gets to add parody as an extra satirical layer because he is coming at the material in the character of a right wing pundit,” says Maisel.

Sparking debate

Many citizens do not “get” satire, or its larger purpose, and criticize it for being, at best, entertainment and, at worst, mockery and ridicule, according to the researchers. The goal of good satire is not mockery but to generate debate and conversation about otherwise taboo subjects.

“The point is this—and it has to be emphasized again and again—satire only reminds us of the sad state of affairs; it doesn’t create it, it can’t mock what doesn’t exist,” the authors write.

“But, as we’ve explained, satire’s goal is not demoralizing mockery; its goal is to invigorate public debate, encourage critical thinking, and call on citizens to question the status quo.”

Maisel, who graduated from Penn State in fall 2014, began collaborating with McClennen as an undergraduate student when the two discovered they were both blogging about satire.

“Remy didn’t just help me write the book, this was a real collaboration,” McClennen says. “I wanted a millennial to work on this and, in our case, Remy brought her own originality and creative ideas, which led to a diversity of experience and ways of thinking about the research.”

Source: Penn State

Astronomers watch black hole choke on a star ‘blob’

Wed, 01/28/2015 - 07:40

A five-year analysis of an event, captured first by a tiny telescope at McDonald Observatory and followed up by telescopes on the ground and in space, has led astronomers to believe they witnessed a giant black hole tearing a star apart.

On January 21, 2009, the ROTSE IIIb telescope at McDonald caught the flash of an extremely bright event. The telescope’s wide field of view takes pictures of large swathes of sky every night, looking for newly exploding stars as part of the ROTSE Supernova Verification Project (RSVP). Software then compares successive photos to find bright “new” objects in the sky—transient events such as the explosion of a star or a gamma-ray burst.
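The article doesn’t describe ROTSE’s pipeline in detail, but the core of any such comparison is image differencing: subtract a registered reference frame from tonight’s frame and flag pixels that brightened significantly. A generic sketch with invented data, not the survey’s actual code:

```python
import numpy as np

def find_transients(new_frame: np.ndarray, reference: np.ndarray,
                    n_sigma: float = 5.0) -> np.ndarray:
    """Return (row, col) pixel positions that brightened significantly."""
    diff = new_frame - reference               # assumes aligned, flux-matched frames
    noise = np.std(diff)                       # crude global noise estimate
    return np.argwhere(diff > n_sigma * noise) # candidate new sources

rng = np.random.default_rng(0)
ref = rng.normal(100.0, 5.0, size=(256, 256))  # fake sky background
new = ref + rng.normal(0.0, 5.0, size=ref.shape)
new[128, 128] += 500.0                         # inject a fake transient
print(find_transients(new, ref))               # -> [[128 128]]
```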

With a magnitude of -22.5, this 2009 event was as bright as the “superluminous supernovae” (a new category of the brightest stellar explosions known) that the ROTSE team discovered at McDonald in recent years. The team nicknamed the 2009 event “Dougie,” after a character in the cartoon “South Park.” (Its technical name is ROTSE3J120847.9+430121.)
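For a sense of scale, and assuming the -22.5 figure is an absolute magnitude (the usual convention when comparing events to superluminous supernovae), the standard magnitude-luminosity relation puts Dougie at tens of billions of solar luminosities:

```python
# Convert an absolute magnitude to solar luminosities using
# L / L_sun = 10 ** ((M_sun - M) / 2.5), with M_sun ~ +4.83 (bolometric).
M_SUN = 4.83
M_DOUGIE = -22.5  # assumed here to be an absolute magnitude

luminosity_solar = 10 ** ((M_SUN - M_DOUGIE) / 2.5)
print(f"~{luminosity_solar:.1e} L_sun")  # ~8.5e10 solar luminosities
```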

The team thought Dougie might be a supernova and set about looking for its host galaxy, which would be too faint for ROTSE to see. They found that a sky survey had mapped a faint red galaxy at Dougie’s location. They used one of the giant Keck telescopes in Hawaii to pinpoint its distance: 3 billion light-years.

What was Dougie?

These deductions meant Dougie had a home—but just what was he? To narrow it down from four possibilities, they studied Dougie with the orbiting Swift telescope and the giant Hobby-Eberly Telescope at McDonald, and they made computer models.

These models showed how Dougie’s light would behave if created by different physical processes. The astronomers then compared the different theoretical Dougies to their telescope observations of the real thing.

“When we discovered this new object, it looked similar to supernovae we had known already,” says lead author Jozsef Vinko of the University of Szeged in Hungary. “But when we kept monitoring its light variation, we realized that this was something nobody really saw before.”

Team member J. Craig Wheeler, leader of the supernova group at the University of Texas at Austin, says they got the idea they might be witnessing a “tidal disruption event,” in which the enormous gravity of a black hole pulls on one side of a star harder than the other side, creating tides that rip the star apart.

“These especially large tides can be strong enough that you pull the star out into a noodle” shape, says Wheeler. The star “doesn’t fall directly into the black hole,” Wheeler explains. “It might form a disk first. But the black hole is destined to swallow most of that material.”

The black hole was choking

Astronomers have seen black holes swallow stars about a dozen times before, but this one is special even in that rare company: it’s not going down easily.

Models by team members James Guillochon of Harvard University and Enrico Ramirez-Ruiz of the University of California at Santa Cruz showed that the disrupted stellar matter was generating so much radiation that it pushed back on the infall. The black hole was choking on the rapidly infalling matter.

Based on the characteristics of the light from Dougie and their deductions of the star’s original mass, the team has determined that Dougie started out as a star like our sun before being ripped apart.

Their observations of the host galaxy, coupled with Dougie’s behavior, led them to surmise that the galaxy’s central black hole has the “rather modest” mass of about a million suns, Wheeler says.

Delving into Dougie’s behavior has unexpectedly taught astronomers more about small, distant galaxies, Wheeler says, musing, “Who knew this little guy had a black hole?”

The work is published this month in the Astrophysical Journal. The paper’s lead author, Jozsef Vinko, began the project while on sabbatical at the University of Texas at Austin. The team also includes Robert Quimby of San Diego State University, who started the search for supernovae using ROTSE IIIb and discovered the category of superluminous supernovae while a graduate student at the University of Texas at Austin.

Source: UT Austin

These 2 questions found hard evidence of love

Wed, 01/28/2015 - 07:17

A new study finds quantitative evidence of love—something very few economic studies have ever claimed.

The researchers asked married couples two penetrating questions about the quality of their marriage, and combined those responses with whether the couples had divorced six years later.

The questions are from the long-term National Survey of Families and Households, administered by the University of Wisconsin:

  • How happy are you in your marriage relative to how happy you would be if you weren’t in the marriage? [Much worse; worse; same; better; much better.]
  • How do you think your spouse answered that question?

The study, published in the International Economic Review, examines how 4,242 households answered those questions in a 1987-88 wave of the survey, and then again roughly six years later, on average, for the 1992-94 wave.

Only 40.9 percent of couples accurately identified how their spouse would answer the question.

So, almost 60 percent of couples had imperfect (asymmetric) information about each other, and roughly a quarter of those had “serious” discrepancies in overall happiness (differing by more than one response category), note the study’s authors, Leora Friedberg and Steven Stern, both professors in the University of Virginia economics department.
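As a concrete illustration of how those shares could be tabulated, here is a minimal sketch. The five response categories are coded 0 ("much worse") through 4 ("much better"), and the handful of couples below are invented, not NSFH records:

```python
# Invented example data: each wife's own answer and her husband's guess.
self_report  = [3, 1, 4, 2, 0, 3]
spouse_guess = [3, 3, 4, 2, 2, 1]

matches = sum(a == g for a, g in zip(self_report, spouse_guess))
serious = sum(abs(a - g) > 1 for a, g in zip(self_report, spouse_guess))

print(f"accurate guesses: {matches / len(self_report):.1%}")          # study: 40.9%
print(f"serious (> 1 category) gaps: {serious / len(self_report):.1%}")
```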

Bargaining theory

According to bargaining theory, the more that one spouse misjudges his or her partner’s happiness (particularly by overestimating), the more likely he or she will bargain “too hard” and make a mistake.

As an example, Stern explains, “If I believe my wife is really happy in the marriage, I might push her to do more chores or contribute a larger portion of the family income. If, unbeknownst to me, she’s actually just lukewarm about the marriage, or she’s got a really good-looking guy who is interested in her, she may decide those demands are the last straw, and decide a divorce would be a better option for her.”

In this scenario, pushing a bargain too hard, based on misperception of a spouse’s happiness (information asymmetry), will result in a divorce that wouldn’t otherwise have occurred.

How happy is your spouse?

Among these 4,242 couples, the data had the general shape predicted by bargaining theory. Divorce rates increased in strong linear correlation with couples’ reported unhappiness with the marriage, and with spouses overestimating their partners’ happiness—two strong indications that the answers were very sincere and accurate, Stern says.

While the average observed divorce rate was 7.3 percent, the rate was higher for couples in which one spouse overestimated how unhappy the other spouse would be if they separated, at 9 percent to 11.7 percent, and even higher if the misperception was serious (with answers differing by more than one response category), at 13.1 percent to 14.5 percent.

Among those couples at the opposite end of the spectrum, in which both spouses said they would be “worse” or “much worse” off if they separated, the divorce rate was substantially lower, at only 4.8 percent.

While the general trend of divorce rates was consistent with bargaining theory, among spouses who misjudged each other’s happiness in the marriage, bargaining theory predicted a divorce rate much higher than it actually was. What would explain this? That’s where love comes in.

‘We needed to include caring’

“We started out trying to explain the findings by modeling bargaining between the spouses,” Friedberg says. “This data shows that people aren’t being as tough negotiators as they could be, and then we realized that we needed to include caring in the model for it to make sense.”

With that observation, Friedberg and Stern put themselves among a very small group of economists in history to have plausibly identified evidence of love in the real world.

“The idea of love here is that you get some happiness from your spouse simply being happy,” Friedberg says. “For instance, I might agree to do more house chores, which reduces my personal happiness somewhat, but I get some offsetting happiness simply knowing that my partner benefits.”
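A minimal way to write down what “caring” adds to the model, assuming a simple linear form (the authors’ actual specification is surely richer) and invented payoff numbers that mirror Friedberg’s chores example:

```python
# Sketch of "caring" in a household bargaining payoff: each spouse's
# utility includes a weight on the partner's happiness.
def payoff(own_happiness: float, partner_happiness: float,
           caring_weight: float) -> float:
    """Utility = own happiness plus a caring share of the partner's."""
    return own_happiness + caring_weight * partner_happiness

# Doing extra chores costs me 1 unit of happiness but gains my partner 1.5.
selfish = payoff(-1.0, 1.5, caring_weight=0.0)  # -1.0: never worth doing
caring  = payoff(-1.0, 1.5, caring_weight=0.8)  #  0.2: now worth doing
print(selfish, caring)
```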

Economists are always looking for people to reveal their preferences through action rather than simply reporting their attitudes, Stern notes. This set of questions provides couples’ reported attitudes toward each other along with their revealed preferences: whether they are divorced or together six years later.

“These two questions are pretty unique in the whole social science literature,” Stern says. “Combined with the revealed preferences of divorce rates six years later, that’s what really makes them powerful.”

Friedberg adds, “These two questions seem to have revealed something fairly profound, something no other surveys have uncovered.”

Public policy and divorce

Friedberg and Stern realized their modeling could address one more issue. With imperfect information about each other, the couples must be making some bargaining mistakes, causing unnecessary divorces by bargaining too hard.

The “optimum allocation” of divorces would generate the most total happiness for all parties. What would that look like? And could any sort of public policy based on public information (i.e. the best possible public policy) nudge the total population closer to the optimal allocation of divorces?

As it turns out, the caring behind not-so-hard bargaining leads to overall divorce rates that are actually fairly close to, just slightly higher than, the optimal allocation of divorces. And there is no observable characteristic or quality recorded by this survey (couples’ age differences, education differential, income differential, household chore effort, and so on) that a policy could be based upon to generate a more optimal level of divorces.

“With any given set of observables, some couples will have good marriages and others will have bad marriages,” explains Stern.

“Any public policy will be based on an average marriage observation, which can’t see things like how much the couple are fighting; whether they have the same long-term interests; whether one of the two are really in love with someone else; or how much each spouse values simply staying together, which would make divorce more painful.

“All of those things should matter. The government can’t create policy based on those things, because it can’t see them.”

As a result, couples on their own are substantially better at deciding when to divorce or not divorce than any policy could be.

Many US states have altered their divorce laws since 1970 in ways that reduce the cost of divorce, and in recent years many leaders have proposed policies to make divorce more difficult in order to reduce the divorce rate.

“In this study, we demonstrate why making divorce more difficult is not a good idea,” Stern says. Because spouses care about each other, he adds, couples are already selecting into divorce in a way that is quite close to optimal.

Source: University of Virginia

Six different scans can ‘see’ this nanoparticle

Tue, 01/27/2015 - 13:31

Six different medical imaging techniques can detect a new type of nanoparticle. This means that, in the future, patients could receive a single injection of the nanoparticles to have all six types of imaging done.

The six types of imaging are:

  • computed tomography (CT) scanning;
  • positron emission tomography (PET) scanning;
  • photoacoustic imaging;
  • fluorescence imaging;
  • upconversion imaging;
  • Cerenkov luminescence imaging.

Six in one?

This kind of “hypermodal” imaging—if it came to fruition—would give doctors a much clearer picture of patients’ organs and tissues than a single method alone could provide. It could help medical professionals diagnose disease and identify the boundaries of tumors.

“This nanoparticle may open the door for new ‘hypermodal’ imaging systems that allow a lot of new information to be obtained using just one contrast agent,” says researcher Jonathan Lovell, assistant professor of biomedical engineering at the University at Buffalo.

“Once such systems are developed, a patient could theoretically go in for one scan with one machine instead of multiple scans with multiple machines.”

When Lovell and colleagues used the nanoparticles to examine the lymph nodes of mice, they found that CT and PET scans provided the deepest tissue penetration, while the photoacoustic imaging showed blood vessel details that the first two techniques missed.

Differences like these mean doctors can get a much clearer picture of what’s happening inside the body by merging the results of multiple modalities.

A machine capable of performing all six imaging techniques at once has not yet been invented, to Lovell’s knowledge, but he and his coauthors hope that discoveries like theirs will spur development of such technology.

Core and shell

The researchers designed the nanoparticles from two components: an “upconversion” core that glows blue when struck by near-infrared light, and an outer fabric of porphyrin-phospholipids (PoP) that wraps around the core.

Each part has unique characteristics that make it ideal for certain types of imaging.

The core, initially designed for upconversion imaging, is made from sodium, ytterbium, fluorine, yttrium, and thulium. Ytterbium is electron-dense, a property that facilitates detection by CT scans.

The PoP wrapper has biophotonic qualities that make it a great match for fluorescence and photoacoustic imaging. The PoP layer is also adept at binding copper, which is used in PET and Cerenkov luminescence imaging.

“Combining these two biocompatible components into a single nanoparticle could give tomorrow’s doctors a powerful, new tool for medical imaging,” says Paras Prasad, executive director of the university’s Institute for Lasers, Photonics and Biophotonics (ILPB), and also a professor of chemistry, physics, medicine, and electrical engineering.

“More studies would have to be done to determine whether the nanoparticle is safe to use for such purposes, but it does not contain toxic metals such as cadmium that are known to pose potential risks and found in some other nanoparticles.”

“Another advantage of this core/shell imaging contrast agent is that it could enable biomedical imaging at multiple scales, from single-molecule to cell imaging, as well as from vascular and organ imaging to whole-body bioimaging,” adds Guanying Chen, a researcher at ILPB and Harbin Institute of Technology in China.

“These broad, potential capabilities are due to a plurality of optical, photoacoustic, and radionuclide imaging abilities that the agent possesses.”

Lovell says the next step in the research is to explore additional uses for the technology.

For example, it might be possible to attach a targeting molecule to the PoP surface that would enable cancer cells to take up the particles, something that photoacoustic and fluorescence imaging can detect due to the properties of the smart PoP coating.

This would enable doctors to better see where tumors begin and end, Lovell says.

The research appears online in the journal Advanced Materials.

Source: University at Buffalo
