For patients with stable coronary artery disease (CAD) who are not experiencing a heart attack but have an abnormal stress test, angioplasty may not offer more benefit than drug therapy alone.
For a new study, researchers analyzed data on more than 4,000 patients with myocardial ischemia, or inadequate blood flow to the heart. The study combined data from clinical trials, performed between 1970 and 2012, in which patients with CAD received either percutaneous coronary intervention (PCI), or angioplasty, plus drug therapy, or drug therapy alone.
Each of the clinical studies within the analysis reported outcomes of death and nonfatal myocardial infarction. Additionally, to reflect contemporary medical and interventional practice, inclusion criteria required stent implantation in at least 50 percent of the PCI procedures and statin medications to lower cholesterol in at least 50 percent of patients in both the PCI and drug therapy alone groups.
This led to a total of five clinical trials yielding 4,064 patients with myocardial ischemia diagnosed by exercise stress testing, nuclear or echocardiographic stress imaging, or fractional flow reserve.
The researchers reviewed outcome data up to five years post PCI or drug treatment alone. They analyzed all-cause death, non-fatal myocardial infarction, unplanned revascularization, and angina in the patients.
Published in JAMA Internal Medicine, the analysis showed that all-cause death rates between the two groups were not significantly different: 6.5 percent for patients receiving PCI and drug therapy versus 7.3 percent for patients receiving drug therapy alone.
Little difference
There was little difference in the rates of non-fatal myocardial infarction (9.2 percent with PCI vs. 7.6 percent with drug therapy) and recurrent or persistent angina (20.3 percent vs. 23.3 percent). The rate of unplanned revascularization was lower with PCI, but the difference was not statistically significant (18.3 percent vs. 28.4 percent).
“If our findings are confirmed in ongoing trials, many of the more than 10 million stress tests performed annually and subsequent revascularizations may be unnecessary,” says David Brown, professor of medicine in the division of cardiovascular medicine at Stony Brook University.
Brown cautions that additional studies beyond analyses of clinical trial data are necessary to fully determine whether PCI practice in stable CAD patients needs to be re-evaluated, and if so, under what circumstances and in which patient populations.
Source: Stony Brook University
A new scheme lets people create 100 or more passwords by remembering—and regularly rehearsing—a few one-sentence stories.
The story sentences become the basis for password fragments that are randomly combined to create unique, strong passwords for multiple accounts.
The scheme ensures that people remember these sentences by pairing them with photos, which serve as mnemonic devices, and by making sure that people either use or rehearse these sentences frequently enough to keep their memories fresh.
These “naturally rehearsing passwords” require a bit more work for the user at the outset than existing password practices, acknowledges Jeremiah Blocki, a Ph.D. student in Carnegie Mellon University’s computer science department.
“But if you can memorize nine stories, our system can generate distinct passwords for 126 accounts,” Blocki says. By memorizing more stories, users can create even more passwords or can make their passwords even more secure. And by reusing and recombining those stories for each password, people naturally rehearse them more often and thus remember them better.
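The arithmetic behind “nine stories, 126 accounts” is plain combinatorics: there are exactly C(9, 4) = 126 ways to choose four stories from nine, so if each account’s password draws on a distinct four-story subset, nine stories cover 126 accounts. The four-stories-per-password figure is an inference from the numbers, not something the article states; a minimal sketch:

```python
from itertools import combinations
from math import comb

# Nine memorized PAO stories (placeholder labels).
stories = [f"story_{i}" for i in range(1, 10)]

# Assumption: each password draws on a distinct 4-story subset.
subsets = list(combinations(stories, 4))

print(len(subsets), comb(9, 4))  # 126 126
```

Memorizing a tenth story would raise the count to C(10, 4) = 210 distinct subsets, which is why a few extra stories buy many extra passwords.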
Blocki and his collaborators say the scheme addresses a major usability and security problem posed by the Internet’s reliance on passwords. Even casual Internet users accumulate so many passwords that they are difficult or impossible to remember. As a result, too many people simply use the same password over and over, or write down their passwords or use other shortcuts that leave their accounts vulnerable to attackers.
Rather than require websites to revise password practices, the researchers have created an application that helps prompt the memory of users. It is in the process of being implemented as a mobile app as part of an undergraduate research project.
Team member Manuel Blum, professor of computer science, says they based their approach on cognitive research on the relationship between memory retention and the frequency at which those memories are rehearsed.
They also drew inspiration from Moonwalking with Einstein, a 2011 bestseller in which author Joshua Foer recounts his experiences in the world of competitive memorization. In particular, they borrowed the concept of the Person-Action-Object, or PAO, system, in which long sequences of numbers or letters are memorized by associating them with images.
How it works
In their scheme, a user initially selects a photo of a person and a photo of an evocative scene; the computer then randomly selects a photo of an object and a photo of an action. With those photos, the user then creates a PAO story that is as vivid and unusual as possible.
For instance, photos of President Bill Clinton, a piranha, and someone kissing might result in a story, “Bill Clinton kissing a piranha,” or “President smooches a fish.” By taking the first letter from each word, or the first three letters from the first two words, the user could generate part of a password.
For each account, the application would randomly assign several such photo combinations and the user would create a password using the letters associated with each photo. During subsequent logons, the application would provide the photos as a memory prompt; even if the user forgets the password, he or she can reconstruct the password by looking at the photos and recalling the associated story.
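The fragment rules described above (the first letter of each word, or the first three letters of each of the first two words) are simple string transforms. This sketch is illustrative only; the function names and the exact rules chosen here are assumptions, not the researchers’ specification:

```python
def first_letters(story: str) -> str:
    # One possible password fragment: the first letter of each word.
    return "".join(word[0] for word in story.split())

def first_three_of_first_two(story: str) -> str:
    # Another fragment: the first three letters of the first two words.
    words = story.split()
    return words[0][:3] + words[1][:3]

story = "Bill Clinton kissing a piranha"
print(first_letters(story))             # BCkap
print(first_three_of_first_two(story))  # BilCli
```

A full password would concatenate several such fragments, one per photo combination assigned to the account.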
Though the photos could be public and unprotected, only the user would know the exact stories associated with each and the ways they are translated into passwords.
Team member Anupam Datta, associate professor of computer science and electrical and computer engineering, says that even if an attacker discovered one complete password, it wouldn’t compromise any other passwords.
The application would keep track of the time intervals between uses of each photo/story pair. Blocki says cognitive research suggests that as memories are created, a person may initially need to rehearse the story every day or two; over time, the intervals can grow much longer.
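The expanding rehearsal intervals Blocki describes can be sketched as a simple scheduler. The doubling schedule and 64-day cap below are illustrative assumptions, not the cognitive model the researchers actually used:

```python
from datetime import date, timedelta

def next_rehearsal(last_rehearsal: date, successful_recalls: int) -> date:
    # Assumed schedule: start at a 1-day interval, double it after each
    # successful recall, and cap it once the memory is well consolidated.
    interval_days = min(2 ** successful_recalls, 64)
    return last_rehearsal + timedelta(days=interval_days)

today = date(2013, 12, 5)
print(next_rehearsal(today, 0))  # 2013-12-06: rehearse again tomorrow
print(next_rehearsal(today, 3))  # 2013-12-13: eight days later
```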
If a person didn’t see a photo compilation and rehearse the associated story within the appropriate interval, the application would prompt the person to rehearse it. Over time, however, as the memory becomes consolidated, normal password use likely will give users all of the rehearsal time necessary, he adds.
Password security
Blocki says users might have as few as nine photo/story pairs, though he personally has opted to use 43 stories to maintain greater security.
“The most annoying thing about using the system isn’t remembering the stories, but the password restrictions of some sites,” says Blocki, who notes that some sites, for instance, require use of numbers, figures, or capital letters in passwords, or have maximum character counts.
“In those cases, I just make a note to, for instance, add a ‘1’ to the password,” he says.
Writing down password information normally is a bad practice, but Blocki says these notes aren’t a problem with naturally rehearsing passwords. “The security is inherent in the passwords themselves,” he explains, “and the notes don’t affect that.”
Blocki presented the work on December 5 at ASIACRYPT 2013, a scientific conference on cryptology in Bangalore, India.
The National Science Foundation and the Air Force Office of Scientific Research supported the research.
Source: Carnegie Mellon University
Chemists have discovered how to reduce three kinds of coal into graphene quantum dots (GQDs) that could be used for medical imaging as well as sensing, electronic, and photovoltaic applications.
Band gaps determine how a semiconducting material carries an electric current. In quantum dots, microscopic discs of atom-thick graphene oxide, band gaps are responsible for their fluorescence and can be tuned by changing the dots’ size.
The new process, described in the journal Nature Communications, allows a measure of control over the dots’ size, generally from 2 to 20 nanometers, depending on the source of the coal.
There are many ways to make GQDs now, but most are expensive and produce very small quantities, says James Tour, chair in chemistry and professor of mechanical engineering and materials science and of computer science at Rice University.
Earlier research found a way to make GQDs from relatively cheap carbon fiber, but coal promises greater quantities of GQDs made even more cheaply in a single chemical step.
“We wanted to see what’s there in coal that might be interesting, so we put it through a very simple oxidation procedure,” Tour says. That involved crushing the coal and bathing it in acid solutions to break the bonds that hold the tiny graphene domains together. “You can’t just take a piece of graphene and easily chop it up this small.”
Different coal, different dots
Tour worked with co-author Angel Martí, assistant professor of chemistry and bioengineering, to characterize the product. It turns out different types of coal produce different types of dots. GQDs were derived from bituminous coal, anthracite, and coke, a byproduct of oil refining.
The coals were each sonicated in nitric and sulfuric acids and heated for 24 hours. Bituminous coal produced GQDs between 2 and 4 nanometers wide. Coke produced GQDs between 4 and 8 nanometers, and anthracite made stacked structures from 18 to 40 nanometers, with small round layers atop larger, thinner layers. (Just to see what would happen, the researchers treated graphite flakes with the same process and got mostly smaller graphite flakes.)
The dots are water-soluble, and early tests have shown them to be nontoxic, offering the promise that GQDs may serve as effective antioxidants, Tour says.
Medical imaging could also benefit greatly, as the dots show robust performance as fluorescent agents.
Quantum dots resist bleaching
“One of the problems with standard probes in fluorescent spectroscopy is that when you load them into a cell and hit them with high-powered lasers, you see them for a fraction of a second to upwards of a few seconds, and that’s it,” Martí says. “They’re still there, but they have been photo-bleached. They don’t fluoresce anymore.”
Testing in the Martí lab showed GQDs resist bleaching. After hours of excitation, the photoluminescent response of the coal-sourced GQDs was barely affected. That could make them suitable for use in living organisms. “Because they’re so stable, they could theoretically make imaging more efficient,” he says.
A small change in the size of a quantum dot—as little as a fraction of a nanometer—changes its fluorescent wavelengths by a measurable factor, and that proved true for the coal-sourced GQDs, Martí says.
Low cost will also be a draw, Tour says.
“Graphite is $2,000 a ton for the best there is, from the UK. Cheaper graphite is $800 a ton from China. And coal is $10 to $60 a ton.
“Coal is the cheapest material you can get for producing GQDs, and we found we can get a 20 percent yield. So this discovery can really change the quantum dot industry. It’s going to show the world that inside of coal are these very interesting structures that have real value.”
The Air Force Office of Scientific Research and the Office of Naval Research funded the work through their Multidisciplinary University Research Initiatives.
Source: Rice University
A new study shows that cancer is a likely cause of scleroderma, an autoimmune disease that thickens and hardens skin, and causes widespread organ damage.
The findings, published in Science, also suggest that a normal immune system is critical for preventing common types of cancer.
“Our study results could change the way many physicians evaluate and eventually treat autoimmune diseases like scleroderma,” says Antony Rosen, director of rheumatology at Johns Hopkins University School of Medicine. “Current treatment strategies that are focused on dampening down the immune response in scleroderma could instead be replaced by strategies aimed at finding, diagnosing, and treating the underlying cancer.”
Rosen says his team’s findings should spur research into possible cancerous origins for other autoimmune diseases, including lupus and myositis.
The causes of autoimmune disease are largely unproven, Rosen says. Scientists have speculated that infections, chemical exposures, and inherited genes could be triggers, although hard evidence is lacking. None of those explains scleroderma, which is estimated to afflict as many as 300,000 Americans of all ages, but is not an inherited disease.
The immune systems of patients with scleroderma often make antibodies to a protein called RPC1. These antibodies are believed to cause the organ damage characteristic of the disease, but it has not been clear why the antibodies are produced.
Gene mutation
Rosen’s team has now demonstrated that cancers from a majority of patients with severe scleroderma have a mutation in a gene called POLR3A, responsible for producing RPC1. These alterations created a “foreign” form of the RPC1 protein, which they say appears to trigger an immune response. The study used blood and tumor tissue samples from 16 patients with scleroderma and different kinds of cancer.
Scientists knew that some patients with scleroderma have a higher incidence of cancer. In the most severe scleroderma, patients with antibodies against RPC1 develop cancers around the time of their diagnosis more frequently than patients who have other antibodies.
Timing
Rosen’s colleague and study co-investigator Kenneth Kinzler suspected that the POLR3A gene encoding RPC1 might contain mutations that trigger the development of cancer and scleroderma.
Kinzler and Bert Vogelstein, co-directors of the Ludwig Center, scanned the POLR3A gene’s DNA code in tumor samples from eight scleroderma patients with cancer and antibodies against RPC1. Tumors from six of the eight had genetic alterations in the POLR3A gene. All eight patients developed cancers between five months prior to their scleroderma diagnosis and two and a half years after it. The close timing of patients’ cancer and scleroderma diagnoses suggests that the two are linked, say the scientists.
“As early cancers grow, the body is exposed to novel proteins caused by the mutations in the cancer and potentially opens a window to development of autoimmune disease,” Vogelstein says.
The scientists found no POLR3A gene mutations in tumor samples from another eight scleroderma patients lacking antibodies against RPC1. These patients also developed cancers, but most long after their diagnosis with scleroderma, with half getting cancer more than 14 years later.
Study results may also help explain why, in some cases, people cured of cancer have also seen their scleroderma disappear.
“This study speaks to the power of the immune system and the emerging picture of harnessing the immune system to treat cancer, adding support to the notion that the immune system may be keeping cancers in check naturally,” says Kinzler, a professor of oncology at the Kimmel Cancer Center.
The National Institutes of Health, the Virginia and D. K. Ludwig Fund for Cancer Research, the Donald B. and Dorothy L. Stabler Foundation, the Scleroderma Research Foundation, and the Rheumatology Research Foundation Bridge Funding Award supported the study.
Source: Johns Hopkins University
Women with moderate to severe menstrual cramps may find relief in a class of erectile dysfunction drugs, a new small study shows.
Primary dysmenorrhea, also called PD, is the most common cause of pelvic pain in women. The current treatment is non-steroidal anti-inflammatory drugs, such as ibuprofen. However, ibuprofen does not work well for all women, and can be associated with ulcers and kidney damage when used chronically, as it often is for PD.
Sildenafil citrate, sold under the brand name Viagra, may help with pelvic pain because it can lead to dilation of the blood vessels. Previous research shows that taking it orally can alleviate pelvic pain, but the incidence of side effects—often headaches—may be too high for routine use.
The researchers looked at administering sildenafil citrate vaginally, which had not yet been tried, to treat PD. They compared pain relief from use of sildenafil vaginally with that of a placebo.
For the study, published in the journal Human Reproduction, researchers recruited women 18 to 35 years old who suffered from moderate to severe PD. Of the 29 women screened for the study, 25 were randomized to receive either sildenafil or a placebo drug.
Patients rated their pain over four consecutive hours. Sildenafil citrate administered vaginally alleviated acute menstrual pain with no reported side effects. The researchers had hypothesized that the drug would relieve pain, which it did, and that it would do so by increasing blood flow. However, because uterine blood flow increased with both sildenafil and the placebo, the mechanism of pain relief is not yet known.
Possible treatment option
“If future studies confirm these findings, sildenafil may become a treatment option for patients with PD,” says Richard Legro, professor of obstetrics and gynecology and public health sciences at Penn State.
“Since PD is a condition that most women suffer from and seek treatment for at some point in their lives, the quest for new medication is justified.”
Larger studies are needed to validate the findings from this small sample, and additional research is needed to see whether sildenafil changes the menstrual bleeding pattern.
Researchers at the BetaPlus Center for Reproductive Medicine in Croatia contributed to the study, which was funded by the National Institutes of Health.
Source: Penn State
Older adults who have 20/20 vision in their eye doctors’ offices may not see as well at home—but they may just need brighter bulbs in their lamps.
“It’s very common for older patients to have concerns about their vision but then test well on the eye charts when we examine them,” says first author Anjali M. Bhorade, associate professor of ophthalmology and visual sciences at Washington University in St. Louis and an ophthalmologist at Barnes-Jewish Hospital.
“In this study, we found that vision in patients’ homes was significantly worse than in the clinic. The major factor contributing to this difference was poor lighting in the home.”
For a study published online in JAMA Ophthalmology, researchers studied 175 patients ages 55-90, including 126 with glaucoma. All patients had their vision measured at home and at the Glaucoma and Comprehensive Eye Clinics at the School of Medicine.
The average scores on vision tests were better in the clinic than at home. Nearly 30 percent of the patients with glaucoma read at least two more lines on an eye chart in the clinic than on the same chart at home, and 39 percent of those with advanced glaucoma read three or more additional lines in the clinic.
The same results were observed for up-close vision: more than 20 percent of patients read two or more additional lines of text at the doctor’s office than at home.
“Older adults with and without glaucoma had similar differences in vision between the clinic and home,” Bhorade says. “These differences occurred not only with distance and near vision, but with contrast sensitivity and glare testing, too. The biggest difference we observed was for distance vision in patients with advanced glaucoma. They had even bigger declines in vision at home.”
Poor lighting
“The lighting levels were below the recommended range in more than 85 percent of the homes we visited,” Bhorade says. “Since most older adults spend the majority of time at home, our study suggests that better lighting may increase vision and possibly improve the quality of life for a large number of people. The houses we visited were almost three to four times less bright than an average clinic.”
Although the study didn’t look specifically at potential dangers associated with low light, such as falls, other research has determined that a difference of two or more lines on an eye chart is associated with a significant difference in how a person functions in daily life.
The findings suggest that there may be a simple solution to ensure that older adults can function at their maximum potential, Bhorade says.
“Increased lighting in the home may significantly improve vision for older adults. However, our study results also suggest that not all older adults benefit from increased lighting.
“Clinicians should refer their patients for a customized in-home evaluation by an occupational therapist or low-vision rehabilitation specialist who can make suggestions to optimize the lighting in people’s homes.”
Funding for the research comes from the National Eye Institute and the National Institute on Aging of the National Institutes of Health, as well as Pfizer, the American Glaucoma Society, the Harvey A. Friedman Center for Aging and Dr. John Morris grant, unrestricted grants from Research to Prevent Blindness, and the Washington University Institute of Clinical and Translational Sciences Multidisciplinary Clinical Research Development Program.
Marketers could soon use the images you post on social media to figure out your “top-of-mind” associations with their brands.
Using five million such images, researchers have taken a first step toward this capability in a new study.
Eric Xing, associate professor of machine learning, computer science, and language technologies at Carnegie Mellon University, and Gunhee Kim, then a Ph.D. student in computer science, looked at images associated with 48 brands in four categories—sports, luxury, beer, and fast food. The images came from popular photo-sharing sites such as Pinterest and Flickr.
Their automated process produced clusters of photos that are typical of certain brands—watch images with Rolex, tartan plaid with Burberry. But some of the highly ranked associations underscored the type of information particularly associated with images and especially with images from social media sites.
For instance, clusters for Rolex included images of horse-riding and auto-racing events, which the watchmaker sponsored. Many wedding clusters were highly associated with the French fashion house of Louis Vuitton.
Both instances, Kim notes, are events where people tend to take and share lots of photos, each of which is an opportunity to show brands in the context in which they are used and experienced.
Photo analysis
Marketers are always trying to get inside the heads of customers to find out what a brand name makes them think or feel. What does “Nike” bring to mind? Tiger Woods? Shoes? Basketball?
Researchers have used questionnaires to gather this information, but, with the advent of online communities, more emphasis is being placed on analyzing texts that people post to social media.
“Now, the question is whether we can leverage the billions of online photos that people have uploaded,” says Kim, now with Disney Research Pittsburgh. Digital cameras and smartphones have made it easy for people to snap and share photos from their daily lives, many of which relate in some way to one brand or another.
“Our work is the first attempt to perform such photo-based association analysis,” Kim says. “We cannot completely replace text-based analysis, but already we have shown this method can provide information that complements existing brand associations.”
From images to ads
Kim and Xing obtained photos that people had shared and had tagged with one of 48 brand names. They developed a method for analyzing the overall appearance of the photos and clustering similar appearing images together, providing core visual concepts associated with each brand.
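The clustering step can be illustrated with a toy version: represent each photo as a small feature vector and group similar vectors together. The bare-bones k-means below operates on made-up 2-D “appearance features”; the researchers’ actual pipeline used far richer visual descriptors, and this algorithm choice is an assumption for illustration:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: returns a cluster label for each point."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared distance).
        labels = [
            min(range(k),
                key=lambda c: sum((p - q) ** 2 for p, q in zip(pt, centroids[c])))
            for pt in points
        ]
        # Move each centroid to the mean of its assigned points.
        for c in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return labels

# Toy "appearance features": two visually distinct groups of photos.
photos = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8), (0.95, 0.85)]
print(kmeans(photos, k=2))  # first two photos share one label, last two the other
```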
They also developed an algorithm that would then isolate the portion of the image associated with the brand, such as identifying a Burger King sign along a highway, or adidas apparel worn by someone in a photo.
Kim emphasizes that this work represents just the first step toward mining marketing data from images. But it also suggests some new directions and some additional applications of computer vision in electronic commerce.
For instance, it may be possible to generate keywords from images people have posted and use those keywords to direct relevant advertisements to that individual, in much the same way sponsored search now does with text queries.
Kim will present the research December 7 at the IEEE Workshop on Large Scale Visual Commerce in Sydney, Australia, and at WSDM 2014, an international conference on search and data mining on the web, February 24-28 in New York City. The National Science Foundation and Google supported the work.
Source: Carnegie Mellon University
An extremely thin layer of clay sediment below the ocean floor was a primary cause of the huge tsunami associated with the 2011 Japan earthquake, according to new research.
Using the deep-sea drilling vessel Chikyu from the Integrated Ocean Drilling Program (IODP), Fred Chester, chair in geology at Texas A&M University, and a team of geoscientists participated in the drilling of three boreholes in the Japan Trench about 150 miles east of Japan, roughly 13 months after the quake and the giant tsunami it triggered devastated the country.
The researchers were able to locate, sample, and place sensitive instruments along the Tohoku earthquake fault, which is the tectonic plate boundary between Japan and the Pacific plate. The variety of data collected shows the fault consists of a very thin layer of water-swelling clay that acts as a form of lubricant during an earthquake slip. The findings are published in the journal Science.
“Apparently, the slippery clay lining of the fault minimizes any braking action once the fault starts to move,” Chester explains. “This likely contributed to the very large offset of the seafloor at the trench that spawned the tsunami. It was more slippery than anyone had believed.”
The ‘largest ever measured’
The Tohoku quake occurred in a subduction zone, a boundary between two tectonic plates in which one plate is diving beneath another, Chester says. It created a “slip” of about 150 feet, “which in earthquake terms, is among the largest ever measured, and it was unexpected by many earthquake scientists that the fault ruptured all the way to the seafloor,” he notes.
By any measure, the Tohoku quake and giant tsunami rank among the most devastating and costly natural disasters in recent history.
At 9.0 magnitude, it was the largest quake ever to hit Japan and one of the five largest earthquakes worldwide since accurate recording began around 1900. It killed at least 15,800 people, most of them by drowning, displaced more than 340,000 people, and damaged more than 600,000 homes or buildings in Japan.
It severely damaged three of Japan’s largest nuclear reactors, resulting in meltdowns, and the World Bank estimated damages from the quake at $235 billion, making it the costliest natural disaster in history. The quake was so large that it produced more than 1,000 aftershocks, at least 80 of which were magnitude 6.0 or greater.
Tapping a seismic fault
The drilling expedition resulted in a number of record achievements in ocean drilling. Although several similar drilling expeditions had sampled subduction zones before, never before had a seismic fault been drilled and sampled after such a large earthquake or at such great depths. Because of the depth of the Japan Trench, the drill sites used by the team were among the deepest ever: more than four miles down to the ocean floor and then another 2,700 feet beneath it.
“We found that the fault itself is very thin, only about 15 feet thick in the area sampled,” Chester adds. “In comparison, the San Andreas fault in California is more than a mile thick in places.”
Also, the instruments placed across the fault, and then later recovered, provided a direct measurement of the strength of the fault. “The extremely accurate temperature measurements, documenting systematic variations of less than half a degree, provide the first-ever determination of the absolute strength of a fault during an earthquake,” Chester notes.
He adds that the findings strongly suggest that the area could be prone to more quakes in the future.
“When an earthquake releases stress in one area, it transfers it to another area,” he says. “So the stress is released in the area of the Tohoku rupture, but it is increased in neighboring sections along the Japan Trench. Hopefully this and other scientific research of the Tohoku event will improve our ability to estimate the probability of other events in the future.”
For more about the drilling expedition, visit the Japan Agency for Marine-Earth Science and Technology online.
The Integrated Ocean Drilling Program, which is funded by a number of entities acting as international partners, including Japan’s Ministry of Education, Culture, Sports, Science, and Technology, and the US National Science Foundation, supported the research.
Source: Texas A&M University
In Taiwan, mothers showed signs of depression and experienced declines in overall health after the death of an adult son, but not a daughter, say researchers.
The same effect did not hold true for fathers after the death of an adult child of either gender.
In East Asian cultures, an adult son’s role in the family is crucial to the wellbeing and financial stability of his parents, the researchers suggest. Therefore, a traumatic event like the death of a son could place quite a strain on elderly parents in these cultures, particularly mothers, and especially if the deceased son is the eldest or only son.
The findings, published in the journal Social Science & Medicine, are based on data from the Taiwanese Longitudinal Study of Aging, a nationally representative survey designed to assess the health of older people in Taiwan.
“In East Asian cultures like Taiwan, sons hold the primary responsibility for providing financial and instrumental assistance to their elderly parents,” says lead author Chioun Lee, a postdoctoral research associate at the Office of Population Research in Princeton University’s Woodrow Wilson School.
“Older women who have had particularly few educational and occupational opportunities are more likely to rely on their sons for support. Therefore, a traumatic event, like a son’s death, could place quite a strain on a mother’s health.”

Measuring well-being
Lee and colleagues used data collected for the Taiwanese Longitudinal Study of Aging from 1996 to 2007, which included approximately 4,200 participants.
To evaluate parental wellbeing, they used two self-reported measures: one for overall health and another for depressive symptoms. Each respondent’s health was assessed based on the following question: “Regarding your current state of health, do you feel it is excellent, good, average, not so good, or poor?”
The items were coded on a one-to-five-point scale with higher scores indicating better health. Past studies have indicated that this measure is a strong predictor of mortality.
Depressive symptoms were measured with an eight-item subset of the Center for Epidemiological Studies Depression Scale, which asks participants to report how often they’ve experienced various situations or feelings in the past week. Possible answers range from “0,” which means rarely or none of the time, to “3,” which is most or all of the time. Higher scores for the eight items indicate more frequent depressive symptoms.
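As a rough illustration of the two scoring schemes described above, here is a minimal sketch. The exact item wording and any weighting used in the Taiwanese survey are not given in the article, so the function names and the sample responses are hypothetical; only the 1-to-5 health scale and the eight 0-to-3 depression items come from the text.

```python
# Hedged sketch of the two self-reported wellbeing measures described above.
# Item content and sample responses are illustrative, not from the survey.

def score_self_rated_health(answer: str) -> int:
    """Map the five response options to a 1-5 scale (higher = better health)."""
    scale = ["poor", "not so good", "average", "good", "excellent"]
    return scale.index(answer) + 1  # 1..5

def score_depressive_symptoms(item_responses: list[int]) -> int:
    """Sum an 8-item CES-D subset; each item is 0 (rarely/none of the time)
    to 3 (most/all of the time), so totals range 0-24."""
    if len(item_responses) != 8 or any(r not in (0, 1, 2, 3) for r in item_responses):
        raise ValueError("expected eight responses coded 0-3")
    return sum(item_responses)  # higher = more frequent depressive symptoms

print(score_self_rated_health("good"))                       # 4
print(score_depressive_symptoms([1, 0, 2, 3, 0, 1, 1, 0]))   # 8
```

On this scoring, the reported finding that bereaved mothers averaged 2.4 points higher on depressive symptoms refers to the 0-to-24 summed scale.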
The researchers controlled for parental wellbeing prior to the death of a child and analyzed the data in stages. First, they tested the extent to which a child’s death affected a parent’s health, and then whether that effect varied by the parent’s sex. Finally, they determined the influence of a deceased child’s sex on parental wellbeing.
They found that women who lost a son scored, on average, 2.4 points higher on levels of depressive symptoms than those who did not lose a child. For men, there were no significant differences.

Gender inequality
There was no evidence to suggest that either mothers or fathers were significantly affected by depressive symptoms or declines in reports of overall health following the death of a daughter. Lee explains that while finances are a concern, there may be other factors at play.
“I also think that various attributes of deceased children, such as birth order, affective bonds with their parents or cause of death, might influence parental wellbeing,” says Lee, who is a native of Korea and observed son preference and gender inequality throughout her childhood.
According to co-author Noreen Goldman, professor of demography and public affairs, these findings underscore the continued gender inequality in Taiwan.
“Despite large advances in women’s labor market participation and educational attainment in recent years—for example, women in Taiwan are now more likely than men to hold a higher education degree—son preference persists, affecting various aspects of women’s well-being,” Goldman says.
Researchers from Georgetown University also contributed to the study.
Source: Princeton University
When scientists compared great white shark genes with those of humans and zebrafish, they found that the shark genes were surprisingly similar to humans’.
The genetic code of the world’s oldest ocean predator is so effective, scientists say, it has barely changed since before the time when dinosaurs roamed the earth.
A new study in BMC Genomics lays the foundation for genomic exploration of sharks and vastly expands genetic tools for their conservation, says Michael Stanhope, professor of evolutionary genomics at Cornell University.
“We were very surprised to find, that for many categories of proteins, sharks share more similarities with humans than zebrafish,” he says. “Although sharks and bony fishes are not closely related, they are nonetheless both fish … while mammals have very different anatomies and physiologies.
“Nevertheless, our findings open the possibility that some aspects of white shark metabolism, as well as other aspects of its overall biochemistry, might be more similar to that of a mammal than to that of a bony fish.”
The study launched when Stanhope and Mahmood Shivji, a professor at Nova Southeastern University, received a Save Our Seas Foundation grant and a rare gift of a great white shark heart. The heart had been autopsied from an illegally fished shark, confiscated by government authorities and donated to the project.
Of particular interest was that the white shark had a closer match to humans for proteins involved in metabolism.
“Sharks have many fascinating characteristics,” Stanhope says. “Some give live birth to fully formed young, while some lay eggs. In some species, the embryos eat the remaining eggs or even other embryos while still developing in the uterus.
“Some can dive very deep, others cannot. Some stay local; others migrate across entire ocean basins. White sharks dive deep, migrate very long distances, and give live birth. We will use what we’ve learned in this species in a broader comparative study of genes involved in these diverse behaviors.”
Because sharks are apex predators, their decreasing number threatens the stability of marine ecosystems, on which millions of people rely for food.
The new study also increased the number of genetic markers scientists can use to study the population biology of great white and related sharks by a thousandfold.
Source: Cornell University
Hummingbirds are equally good at burning both components of sugar—glucose and fructose. It’s a unique trait that other vertebrates don’t have.
The tiny birds can power all of their energetic hovering flight by burning the sugar contained in the floral nectar of their diet.
“Hummingbirds have an optimal fuel-use strategy that powers their high-energy lifestyle, maximizes fat storage, and minimizes unnecessary weight gain all at the same time,” says Kenneth Welch, assistant professor of biological sciences at University of Toronto Scarborough.
Welch and his graduate student Chris Chen, co-author of the research, fed hummingbirds separate enriched solutions of glucose and fructose while collecting exhaled breath samples. They found the birds were able to switch between burning glucose and fructose equally well.
“What’s very surprising is that unlike mammals such as humans, who can’t rely on fructose to power much of their exercise metabolism, hummingbirds use it very well. In fact, they are very happy using it and can use it just as well as glucose,” says Welch.
Hummingbirds require an incredible amount of energy to flap their wings 50 times or more per second in order to maintain hovering flight. In fact, if a hummingbird were the size of a human, it would consume energy at a rate more than 10 times that of an Olympic marathon runner.
Hummingbirds are able to accomplish this by burning only the most recently ingested sugar in their muscles while avoiding the energetic tax of first converting sugar into fat.
From an evolutionary perspective the findings make perfect sense, says Welch. Whereas humans evolved over time on a complex diet, hummingbirds evolved on a diet rich in sugar.
“Hummingbirds are able to move sugar from their blood to their muscles at very fast rates, but we don’t yet fully understand how they are able to do this,” he says.

Human diets
Humans are not good at burning fructose because, once ingested, much of it is taken into the liver, where it is turned into fat. The prevalence of high-fructose corn syrup in products like soda is also strongly linked to rising obesity rates.
Hummingbirds, on the other hand, burn sugar so fast that if they were the size of an average person, they would need to drink more than one can of soda every minute, even though soda is made mostly of high-fructose corn syrup.
“If we can gain insights on how hummingbirds cope with an extreme diet then maybe it can shed some light on what goes wrong in us when we have too much fructose in our diet,” says Welch.
The research appears online in the journal Functional Ecology.
Source: University of Toronto
Too much noise from shipping can stress out marine mammals, so scientists have developed a technique to monitor ship traffic and noise in a protected dolphin habitat in Scotland.
The effort is focused on the Moray Firth, the country’s largest inlet and home to a population of bottlenose dolphins and various types of seals, porpoises, and whales. This protected habitat also houses construction yards that feed Scotland’s ever-expanding offshore wind sector.
Projected increases in wind farm construction are expected to bring more shipping through the habitat—something scientists think could have a negative impact on resident marine mammals.
“Different ships emit noise at different levels and frequencies, so it’s important to know which types of vessels are crossing the habitats and migration routes of marine mammals,” says Nathan Merchant, a postdoctoral researcher at Syracuse University. “The cumulative effect of many noisy ship passages can raise the physiological stress level of marine mammals and affect foraging behavior.”
Merchant says underwater noise levels have been increasing over recent decades. “These changes in the acoustic environment affect marine mammals because they rely on sound as their primary sensory mode. The disturbance caused by this man-made noise can disrupt crucial activities like hunting for food and communication, affecting the fitness of individual animals.”
He adds: “Right now, the million-dollar question is: Does this disturbance lead to changes in population levels of marine mammals? That’s what these long-term studies are ultimately trying to find out.”
To address a lack of reliable baseline data, Merchant and his collaborators at the University of Aberdeen have developed a way to monitor underwater noise levels using ship-tracking data and shore-based time-lapse photography.
These techniques, detailed in a study published in Marine Pollution Bulletin, form a ship-noise assessment toolkit, which Merchant says may be used to study noise from shipping in other habitats.
Source: Syracuse University
People can often tell how tall or short you are just by the sound of your voice, research shows.
The key may be in a particular type of sound produced in the lower airways of the lungs, known as a subglottal resonance.
“The best way to think about subglottal resonances is to imagine blowing into a glass bottle partially filled with liquid: the less liquid in the bottle, the lower the sound,” says John Morton, a psychology graduate student at Washington University in St. Louis.
The frequency of the subglottal resonance differs depending on the height of the person generating it, with resonances becoming progressively lower as height increases.
Morton presented his findings Dec. 3 at the meeting of the Acoustical Society of America.
“In humans, the resonances are part of a larger group of sounds, which are sort of like an orchestra playing over the sound being made from the glass bottle. (The glass bottle) sound is still there, but it isn’t easy to hear.”
Despite the masking of the subglottal resonance by other voice sounds, researchers wondered if the key information it contained could still be heard by listeners.
Through two sets of experiments, they put the theory to the test. In the first, pairs of same-sex “talkers” of different heights were recorded as they read identical sentences. Later, the recordings were played to listeners who guessed which of the two speakers was taller.
In the second experiment, listeners ranked five talkers (again of the same gender) from tallest to shortest, after hearing them read.
The researchers found that participants correctly identified the taller speaker 62.17 percent of the time, significantly more often than would be expected by chance alone, Morton says.
“Both males and females were equally able to discriminate and rank the heights of talkers” of both genders.
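The claim that 62.17 percent correct is significantly above chance can be illustrated with a one-sided binomial check. The article does not report how many judgments listeners made, so the trial count below is a hypothetical stand-in; the sketch uses a normal approximation to the binomial rather than whatever test the researchers actually ran.

```python
import math

def binomial_p_above_chance(successes: int, trials: int, p0: float = 0.5) -> float:
    """One-sided p-value (normal approximation) that the observed success
    rate exceeds the chance level p0."""
    mean = trials * p0
    sd = math.sqrt(trials * p0 * (1 - p0))
    z = (successes - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))  # P(Z >= z)

# Illustrative: with a hypothetical 300 two-way judgments, 62.17 percent
# correct (~187 successes) is far above the 50 percent chance level.
p = binomial_p_above_chance(187, 300)
print(p < 0.05)  # True
```

With a chance rate of 50 percent for a two-alternative judgment, even modest sample sizes make a 62 percent hit rate statistically reliable.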
The research has criminal justice implications, Morton says.
“One would certainly like to know if, when an ‘ear witness,’ as they are often called, says that a talker’s voice seemed ‘tall’ or ‘large,’ this information can be trusted. The answer seems to be yes.”
Mice with symptoms similar to autism improved after they were treated with probiotic therapy.
The results, reported in the journal Cell, offer the first evidence that changes in gut bacteria can influence autism-like behaviors in mice.
Many people with autism spectrum disorder (ASD) suffer from gastrointestinal (GI) issues, such as abdominal cramps and constipation.
“Traditional research has studied autism as a genetic disorder and a disorder of the brain, but our work shows that gut bacteria may contribute to ASD-like symptoms in ways that were previously unappreciated,” says Sarkis K. Mazmanian, a biology professor at the California Institute of Technology (Caltech). “Gut physiology appears to have effects on what are currently presumed to be brain functions.”
To study this gut–microbiota–brain interaction, the researchers used a mouse model of autism previously developed at Caltech in the laboratory of Paul H. Patterson, a professor of biological sciences.

Leaky gut
In humans, having a severe viral infection raises the risk that a pregnant woman will give birth to a child with autism. Patterson and his lab reproduced the effect in mice using a viral mimic that triggers an infection-like immune response in the mother and produces the core behavioral symptoms associated with autism in the offspring.
In the new Cell study, Mazmanian, Patterson, and colleagues found that the “autistic” offspring of immune-activated pregnant mice also exhibited GI abnormalities. In particular, the GI tracts of autistic-like mice were “leaky,” which means that they allow material to pass through the intestinal wall and into the bloodstream.
This characteristic, known as intestinal permeability, has been reported in some autistic individuals.
“To our knowledge, this is the first report of an animal model for autism with comorbid GI dysfunction,” says Elaine Hsiao, a senior research fellow at Caltech and the first author on the study.
To see whether these GI symptoms actually influenced the autism-like behaviors, the researchers treated the mice with Bacteroides fragilis, a bacterium that has been used as an experimental probiotic therapy in animal models of GI disorders.
The result? The leaky gut was corrected.

Human trials
In addition, observations of the treated mice showed that their behavior had changed. In particular, they were more likely to communicate with other mice, had reduced anxiety, and were less likely to engage in a repetitive digging behavior.
“The B. fragilis treatment alleviates GI problems in the mouse model and also improves some of the main behavioral symptoms,” Hsiao says. “This suggests that GI problems could contribute to particular symptoms in neurodevelopmental disorders.”
With the help of clinical collaborators, the researchers are now planning a trial to test the probiotic treatment on the behavioral symptoms of human autism. The trial should begin within the next year or two, says Patterson.
“This probiotic treatment is postnatal, which means that the mother has already experienced the immune challenge, and, as a result, the growing fetuses have already started down a different developmental path,” Patterson says. “In this study, we can provide a treatment after the offspring have been born that can help improve certain behaviors. I think that’s a powerful part of the story.”
The researchers stress that much work is still needed to develop an effective and reliable probiotic therapy for human autism—in part because there are both genetic and environmental contributions to the disorder, and because the immune-challenged mother in the mouse model reproduces only the environmental component.
“Autism is such a heterogeneous disorder that the ratio between genetic and environmental contributions could be different in each individual,” Mazmanian says. “Even if B. fragilis ameliorates some of the symptoms associated with autism, I would be surprised if it’s a universal therapy—it probably won’t work for every single case.”
The Caltech team proposes that particular beneficial bugs are intimately involved in regulating the release of metabolic products (or metabolites) from the gut into the bloodstream. Indeed, the researchers found that in the leaky intestinal wall of the autistic-like mice, certain metabolites that were modulated by microbes could both easily enter the circulation and affect particular behaviors.
“I think our results may someday transform the way people view possible causes and potential treatments for autism,” Mazmanian says.
The work was supported by a Caltech Innovation Initiative grant, an Autism Speaks Weatherstone Fellowship, a National Institutes of Health/National Research Service Award Ruth L. Kirschstein Predoctoral Fellowship, a Human Frontiers Science Program Fellowship, a Department Of Defense Graduate Fellowship, a National Science Foundation Graduate Research Fellowship, an Autism Speaks Trailblazer Award, a Caltech Grubstake award, a Congressionally Directed Medical Research Award, a Weston Havens Foundation Award, several Callie McGrath Charitable Foundation awards, and the National Institute of Mental Health.
An analysis of the femur of one of the oldest human ancestors reveals that the six-million-year-old “Millennium Man” was bipedal but lived in the trees.
The research, published in Nature Communications, could provide additional insight into the origins of human bipedalism.
In the paper, lead investigator Sergio Almécija, a research instructor in the department of anatomical sciences at Stony Brook University School of Medicine, and co-authors clarify and contextualize the place of Orrorin tugenensis, or Millennium Man, in human and ape evolution.
The team completed 3D geometric morphometric analyses on the shape and characteristics of the femur of Orrorin, which reveals its morphology to be an “intermediate” between fossil apes and later human ancestors (hominins).
The findings open a new avenue in bipedal evolution research, as they illustrate that hominins and living apes evolved in different directions from fossil apes of the Miocene (23 to 5 million years ago).
Millennium Man is a fossil from East Africa and is considered one of the best candidate species for the earliest hominins. However, some scientists have questioned its hominin status.
Miocene apes are fossil relatives of the ape-human lineage with body shapes somewhere in between those of living monkeys and apes. Most Miocene apes walked on all fours in the trees instead of suspending themselves below branches.

Comparing bones
According to Almécija, the study for the first time compared the six-million-year-old Millennium Man femur (called BAR 1002’00) using state-of-the-art morphometric techniques not only to other available hominin fossils but also to great apes, hylobatids (i.e., gibbons and siamangs), and, most importantly, to fossil apes that lived in the Miocene. The analysis included more than 400 specimens.
“We discovered that Orrorin’s femur is surprisingly ‘intermediate’ in both age and anatomy between quadrupedal Miocene apes and bipedal early human ancestors,” says Almécija.
“Our paper provides quantitative results of the Orrorin femur as a unique mosaic and stresses the need to incorporate fossil apes into future analyses and discussions dealing with the evolution of human bipedalism, an investigation that should stop considering chimpanzees as default living ‘starting point’ models.”

Overlooked apes
A similar take-home message was derived from the extensive analyses of the postcranial skeleton of the 4.4 million-year-old Ardipithecus ramidus from Ethiopia.
Almécija explains that because chimpanzees are our closest living relatives in terms of molecular data, a majority of paleoanthropologists presume that the last common ancestor of chimpanzees and humans looked exactly like a chimpanzee.
For that reason, Miocene apes have been largely ignored in the human origins literature. Although chimpanzees and other great apes may still represent good ancestral models for other anatomical regions, the new study shows that this is not the case for the proximal femur.
Based on the 3D geometric morphometric analyses, the Orrorin femur is most similar overall to that of the Miocene ape Proconsul nyanzae, but it is also closely linked to Australopithecus afarensis (i.e., “Lucy”).

Two-legged heritage
Co-author William Jungers, teaching professor and chair of the department of anatomical sciences, emphasizes that the team’s reconstruction and findings also reveal that some Miocene apes may represent a more appropriate model than existing great apes, particularly the chimpanzee, for the ancestral morphology from which hominins evolved.
“Living apes have long and independent evolutionary histories of their own, and their modern anatomies should not be assumed to represent the ancestral condition for our human lineage,” explains Jungers. “But we need a better understanding of the paleobiology of Miocene apes in order to properly inform us as to how and when walking on two legs became part of our heritage.”
The research leading to the findings was supported in part by the Fulbright Commission and the Generalitat de Catalunya of Spain, the Wenner Gren Foundation, and the National Science Foundation.
Source: Stony Brook University
More than one in three twin births and three of four births of triplets or more in the United States are the result of fertility treatments, new estimates show.
While in-vitro fertilization (IVF) practices have improved to produce fewer triplets or higher-order births than at their peak, multiple births from other types of fertility treatments have not slowed.
The proportion of triplets or more related to medical assistance has actually dropped from a peak of 84 percent in 1998, after IVF guidelines discouraging implantation of three or more embryos took effect that year, a new study reports.
IVF has also improved enough that single embryo transfers now often succeed in producing healthy pregnancies. But in the meantime, non-IVF fertility treatments such as ovarian stimulation and ovulation induction—for instance, with the drug clomiphene citrate—have increased to become the predominant source of medically assisted multiple births in the country, while IVF is increasingly producing twins.
Some mothers and couples may hope for twins through fertility treatments, but more often multiple births are not desired. In those cases, new parents and children incur unwarranted medical risk and long-term financial costs that doctors should strive to prevent, says Eli Y. Adashi, professor of obstetrics and gynecology at Brown University.
“We do have a real problem with way too many multiple births in the United States, with consequences to both mothers and babies. It’s an unintended consequence of otherwise well-intentioned and remarkable technology.”

Multiple births multiply
To arrive at their estimates, researchers gathered data on multiple births from 1962 to 1966 (before any medical fertility treatments were available) and from 1971 through 2011. Data on IVF procedures has been available since 1997, but no data is available that directly reflects the contribution of non-IVF procedures to rates of multiple births.
The team therefore estimated the role of non-IVF technologies by subtracting the multiple births arising from IVF from the overall number of multiple births, while also accounting for the impact of maternal age on birth plurality. The data from the 1960s, meanwhile, provided a statistical baseline for natural multiple birth rates without medical intervention that the team also used in their estimates.
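The subtraction logic described above can be sketched in a few lines. The numbers below are made up for illustration; the study’s actual inputs are national birth records, IVF surveillance data, and the age-adjusted 1962–1966 natural baseline, and its real model adjusts for maternal age rather than using a simple difference.

```python
# Hedged sketch of the attribution-by-subtraction estimate described above.
# All counts here are illustrative, not the study's data.

def estimate_non_ivf_multiples(total_multiples: int,
                               ivf_multiples: int,
                               natural_multiples: int) -> int:
    """Multiples not explained by IVF or by the natural (pre-treatment)
    baseline are attributed to non-IVF fertility treatments."""
    non_ivf = total_multiples - ivf_multiples - natural_multiples
    if non_ivf < 0:
        raise ValueError("baseline and IVF counts exceed the observed total")
    return non_ivf

# Illustrative year: 140,000 multiple births, 45,000 from IVF,
# 60,000 expected naturally -> 35,000 attributed to non-IVF treatments.
print(estimate_non_ivf_multiples(140_000, 45_000, 60_000))  # 35000
```

The key design point is that non-IVF treatments leave no direct paper trail in national data, so their contribution can only be inferred as the residual after the measured components are removed.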
The contribution of fertility treatments over the last 40 years is unmistakable: Between 1971 and 2011, the percent of US births that were multiples doubled to 3.5 percent from 1.8 percent. Even after adjusting for maternal age, the rate of twin births rose 1.6 times between 1971 and 2009.
And while triplets or more due to IVF have dropped to 32 percent of cases from 48 percent between 1998 and 2011, the percent of triplets or more due to non-IVF procedures rose to 45 percent of cases from 36 percent during that same time.

Hard to control
“IVF is moving, in a sense, in the right direction and cleaning up its act, whereas the non-IVF technologies are at a minimum holding their own and possibly getting worse,” Adashi says.
“From a policy point of view what that means is that (we) need to focus on the non-IVF technologies, which really hasn’t been done in a concerted way because they weren’t considered all that relevant to this mix.”
Ultimately, it may be harder to curb multiple births from non-IVF treatments than from IVF. While multiple births from IVF are a direct result of the number of embryos that are fertilized and intentionally implanted, non-IVF therapies involve medications that stimulate ovulation and follicle growth in ways that cannot be precisely predicted or controlled.
The new estimates will at least focus more attention on the major contribution non-IVF treatments make on multiple births, the authors write in the paper published in the New England Journal of Medicine.
That may spur improved data gathering, such as creation of a registry of non-IVF treatments and outcomes, and ultimately more careful practice regimens.
“Increased awareness of multiple births resulting from non-IVF fertility treatments may lead to improved medical practice patterns and a decrease in the rate of multiple births,” the paper concludes.
Researchers from Johns Hopkins University, the US Centers for Disease Control and Prevention, and the Cincinnati Children’s Hospital Medical Center contributed to the study.
Source: Brown University
There are more than 1,000 alcohol brands on the market, but only four brands show up often in the lyrics of popular songs.
Those brands are: Patron tequila, Hennessy cognac, Grey Goose vodka, and Jack Daniel’s whiskey.
They accounted for more than half of the alcohol brands named in songs from Billboard’s most popular song lists in 2009, 2010, and 2011.
“You would expect there would be hundreds of brands that are randomly mentioned,” says Michael Siegel, a professor of community health sciences at Boston University’s School of Public Health. “But we found that those top four accounted for 52 percent of all the brand mentions. That can’t be coincidental.”
The findings—published in the journal Substance Use & Misuse—raise questions about the relationship between alcohol companies and the music industry, in terms of both specific marketing and the larger influence on youth drinking behavior. The study, coauthored by researchers from the Johns Hopkins Bloomberg School of Public Health, is the first to examine in depth the context of the use of specific brand names in music.

Marketing to kids
In addition to identifying a small number of brands frequently mentioned in popular music, the study found that alcohol use was portrayed as overwhelmingly positive in lyrics, with negative consequences almost never referred to.
The study—citing the heavy exposure of youths to popular music—said preliminary data about youth alcohol consumption suggests that many of the brands that were recurrently named in songs also are popular drinks for underage drinkers.
The authors called the results “alarming, because they suggest that popular music may be serving as a major source of promotion of alcohol use in general—and of consumption of specific brands in particular—to underage youth.”
But Siegel says that further research is needed to determine a “causal connection” between promotion in music and actual consumption.
What the research did uncover was that the alcohol brands mentioned in songs often had sponsorship or other relationships with the artists—sometimes in the form of concert sponsorships or endorsement agreements.

Similar to cigarettes?
For example, Sean “Diddy” Combs is a paid spokesperson for Ciroc vodka and has a $100 million marketing deal with Diageo, the manufacturer of Ciroc. Grey Goose sponsors a television show on Black Entertainment Television that highlights up-and-coming urban music artists. And Patron sponsored a concert that was part of the Austin City Limits Music Festival, which showcased a number of urban artists.
“What we have to recognize is that the placement of brands in music is a form of alcohol marketing,” Siegel says. “It’s similar to when cigarette companies used to pay production companies to feature their brands in movies.
“Alcohol companies are now the ones developing financial relationships to encourage this kind of marketing. It really needs to be recognized as marketing, not random chance.”

Urban, pop, country, and rock
Of the 720 songs examined in the review, 167 (23.2 percent) mentioned alcohol, and 46 (6.4 percent) named specific alcohol brands. The leading four brands accounted for 51.6 percent of all alcohol brands specified by name.
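The proportions reported above follow directly from the counts, as a quick check confirms. This is simple arithmetic on the figures given in the article, not a reconstruction of the study’s coding procedure.

```python
# Verify the reported percentages from the song counts in the study.
songs = 720         # unique songs reviewed
with_alcohol = 167  # songs mentioning alcohol
with_brand = 46     # songs naming a specific alcohol brand

print(round(100 * with_alcohol / songs, 1))  # 23.2
print(round(100 * with_brand / songs, 1))    # 6.4
```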
The study found that alcohol was most commonly referred to in so-called urban songs (rap, hip-hop, and R&B, with 37.7 percent), followed by country (21.8 percent), and pop (14.9 percent).
At least 14 long-term studies have found that exposure to alcohol marketing in the mass media increases the likelihood that young people will start drinking, or if already drinking, consume more. Adolescents in the United States spend an estimated 2.5 hours a day listening to music.
Siegel and his colleagues used the Billboard listings to identify 720 unique songs in four genres: urban, pop, country, and rock. Three coders analyzed the lyrics of each song to determine alcohol references, brand references, and the context for each.
The researchers found mention of alcohol in 167 songs. Tequila, cognac, vodka, and champagne brands appeared most often in urban music (R&B, hip-hop, and rap), while whiskey and beer brands were more common in country and pop music. Surprisingly, none of the rock songs examined contained any reference to alcohol.
Only 4 of the 46 songs naming alcohol brands had a negative context, negative consequences, or negative emotion associated with alcohol use, the study found. The majority of songs portrayed alcohol use as “a fun part of the youth lifestyle that is free of consequences,” the authors wrote. “Furthermore, we found evidence that many songs glamorize underage drinking and excessive alcohol consumption and their association with sex and partying.”
Alcohol is responsible for at least 4,700 deaths annually among people under age 21 in the United States. Surveys indicate that more than 70 percent of high school students have consumed alcohol, and about 22 percent engage in heavy episodic drinking.
Siegel says that if further research shows a causal connection between marketing and consumption, there are several interventions that could be adopted—not in an effort to censor music, but instead to educate youths about the marketing influence. One intervention, he says, would be to teach young people “media literacy skills” that would educate them about marketing techniques.
“They’re being used in a way . . . to try to influence their consumption,” Siegel says. “If we can educate them about that, it might mitigate the effect.”
The National Institute on Drug Abuse and the National Institute on Alcohol Abuse and Alcoholism funded the study.
Source: Boston University
After capturing the first brain images of two alert, unrestrained dogs last year, researchers have confirmed their methods and results by replicating them in an experiment involving 13 dogs.
The research, published in PLOS ONE, shows that most of the dogs had a positive response in the caudate region of the brain when given a hand signal indicating they would receive a food treat, as compared to a different hand signal for “no treat.”
“Our experiment last year was really a proof of concept, demonstrating that dogs could be trained to undergo successful functional Magnetic Resonance Imaging (fMRI),” says the lead researcher Gregory Berns, director of Emory University’s Center for Neuropolicy.
“Now we’ve shown that the initial study wasn’t a fluke: Canine fMRI is reliable and can be done with minimal stress to the dogs. We have laid the foundation for exploring the neural biology and cognitive processes of man’s best, and oldest, friend.”
Both the initial experiment and the more recent one involved training the dogs to acclimatize to an fMRI machine. The task requires dogs to cooperatively enter the small enclosure of the fMRI scanner and remain completely motionless despite the noise and vibration of the machine.
Only those dogs that willingly cooperated were involved in the experiments. The canine subjects were given harmless fMRI brain scans while they watched a human giving hand signals that the dogs had been trained to understand. One signal indicated that the dog would receive a hot dog for a treat. The other hand signal meant that the dog would not receive a hot dog.
Human and dog brains
The most recent experiment involved the original two dogs, plus 11 additional ones, of varying breeds. Eight out of the 13 showed the positive caudate response for the hand signal indicating they were going to receive a hot dog.
The caudate sits above the brain stem in mammals and has the highest concentration of dopamine receptors, which are implicated in motivation and pleasure, among other neurological processes.
“We know that in humans, the caudate region is associated with decision-making, motivation, and processing emotions,” Berns says.
As a point of reference, the researchers compared the results to a similar experiment Berns had led 10 years previously involving humans, in which the subjects pressed a button when a light appeared, to get a squirt of fruit juice.
Eleven of 17 humans involved in that experiment showed a positive response in the caudate region that was similar to the positive response of the dogs. “Our findings suggest that the caudate region of the canine brain behaves similarly to the caudate of the human brain, under similar circumstances,” Berns says.
Therapy dogs
Six of the dogs involved in the experiment had been specially bred and trained to assist disabled people as companion animals, and two of the dogs (including one of the service dogs) had worked as therapy dogs, used to help alleviate stress in people in hospitals or nursing homes. All of the service/therapy dogs showed a greater level of positive caudate activation for the hot dog signal, compared to the other dogs.
“We don’t know if the service dogs and therapy dogs showed this difference because of genetics, or because of the environment in which they were raised, but we hope to find out in future experiments,” Berns says. “This may be the first hint of how the brains of dogs with different temperaments and personalities differ.”
He adds: “I don’t think it was because they liked hot dogs more. I saw no evidence of that. None of the dogs turned down the hot dogs.”
One limitation of the experiments is the small number of subjects and the selectivity of the dogs involved, since only certain dogs can be trained to do the experiments, Berns says.
“We’re expanding our cohort to include more dogs and more breeds,” Berns says. “As the dogs get more accustomed to the process, we can conduct more complicated experiments.”
Future experiments
Plans call for comparing how the canine brain responds to hand signals coming from the dog’s owner, a stranger and a computer. Another experiment already under way is looking at the neural response of dogs when they are exposed to scents of members of their households, both humans and other dogs, and unfamiliar humans and dogs.
“Ultimately, our goal is to map out canine cognitive processes,” says Berns, who recently published a book entitled How Dogs Love Us: A Neuroscientist and His Adopted Dog Decode the Canine Brain.
Even in an increasingly technical era, the role of dogs has not diminished, Berns says. In addition to being popular pets, he notes that dogs are important in the US military, in search-and-rescue missions, as assistants for the disabled, and as therapeutic stress-relievers for hospital patients and others.
“Dogs have been a part of human society for longer than any other animal,” Berns says. He cites a genetic analysis recently published in Science suggesting that the domestication of dogs goes back 18,000 to 32,000 years, preceding the development of agriculture some 10,000 years ago.
“Most neuroscience studies on animals are conducted to serve as models for human disease and brain functions,” Berns says. “We’re not studying canine cognition to serve as a model for humans, but what we learn about the dog brain may also help us understand more about how our own brains evolved.”
Source: Emory University
You don’t really know what your fingers are typing. Skilled typists can’t identify the positions of most keys on the keyboard, and novices don’t appear to learn key locations in the first place, a new study shows.
“This demonstrates that we’re capable of doing extremely complicated things without knowing explicitly what we are doing,” says Vanderbilt University graduate student Kristy Snyder, the first author of the study, which was conducted under the supervision of Centennial Professor of Psychology Gordon Logan.
Researchers recruited 100 university students and members from the surrounding community to participate in an experiment. The participants completed a short typing test. Then, they were shown a blank QWERTY keyboard and given 80 seconds to write the letters in the correct location. On average, they typed 72 words per minute, moving their fingers to the correct keys six times per second with 94 percent accuracy. By contrast, they could accurately place an average of only 15 letters on a blank keyboard.
The fact that the typists did so poorly at identifying the position of specific keys didn’t come as a surprise. For more than a century, scientists have recognized the existence of automatism: the ability to perform actions without conscious thought or intention. Automatic behaviors of this type are surprisingly common, ranging from tying shoelaces to making coffee to factory assembly-line work to riding a bicycle and driving a car. So scientists had assumed that typing also fell into this category, but had not tested it.
Keyboard “memory”?
What did come as a surprise, however, was a finding that conflicts with the basic theory of automatic learning, which suggests that it starts out as a conscious process and gradually becomes unconscious with repetition.
According to the widely held theory—primarily developed by studying how people learn to play chess—when you perform a new task for the first time, you are conscious of each action and store the details in working memory. Then, as you repeat the task, it becomes increasingly automatic and your awareness of the details gradually fades away. This allows you to think about other things while you are performing the task.
Given the prevalence of this “use it or lose it” explanation, the researchers were surprised when they found evidence that the typists never appear to memorize the key positions, not even when they are first learning to type.
“It appears that not only don’t we know much about what we are doing, but we can’t know it because we don’t consciously learn how to do it in the first place,” says Logan. The study, which includes coauthors at Kobe University in Japan, is available online in the journal Attention, Perception & Psychophysics.
Evidence for this conclusion came from another experiment included in the study.
The researchers recruited 24 typists who were skilled on the QWERTY keyboard and had them learn to type on a Dvorak keyboard, which places keys in different locations. After the participants developed a reasonable proficiency with the alternative keyboard, they were asked to identify the placement of the keys on a blank Dvorak keyboard. On average, they could locate only 17 letters correctly, comparable to participants’ performance with the QWERTY keyboard.
According to the researchers, the lack of explicit knowledge of the keyboard may be due to the fact that computers and keyboards have become so ubiquitous that students learn how to use them in an informal, trial-and-error fashion when they are very young.
“When I was a boy, you learned to type by taking a typing class and one of the first assignments was to memorize the keyboard,” Logan recalls.
The National Science Foundation funded the research.
Source: Vanderbilt University
When animals eat tiny pieces of plastic, toxic concentrations of pollutants and additives enter their tissues, new research shows.
With global production of plastic exceeding 280 million metric tons every year, a fair amount of the stuff is bound to make its way into the natural environment. However, until now researchers haven’t known whether ingested plastic transfers chemical additives or pollutants to wildlife. The findings appear in Current Biology.
Lead author Mark Anthony Browne, a postdoctoral fellow at University of California, Santa Barbara’s National Center for Ecological Analysis and Synthesis (NCEAS), had two objectives when the study began: to look at whether chemicals from microplastic move into the tissues of organisms; and to determine any impacts on the health and the functions that sustain biodiversity.
Microplastics are micrometer-size pieces of plastic that erode from larger fragments, shed as fibers when clothing is washed, or enter waterways as granules added to cleaning products.
A variety of animals, beginning at the bottom of the food chain, consume them. The tiny bits of plastic act like magnets, attracting pollutants from the surrounding environment onto their surfaces.
“The work is important because current policy in the United States and abroad considers microplastic as non-hazardous,” Browne says. “Yet our work shows that large accumulations of microplastic have the potential to impact the structure and functioning of marine ecosystems.”
Browne ran laboratory experiments with colleagues in the United Kingdom in which they exposed lugworms (Arenicola marina) to sand with five percent microplastic (polyvinylchloride) that also contained common chemical pollutants (nonylphenol, phenanthrene) and additives (triclosan, PBDE-47).
Results show that pollutants and additives from ingested microplastic were present in the worms’ tissues at concentrations that compromise key functions that normally sustain health and biodiversity.
“In our study, additives, such as triclosan (an antimicrobial), that are incorporated into plastics during manufacture caused mortality and diminished the ability of the lugworms to engineer sediments,” Browne says.
“Pollutants on microplastics also increased the vulnerability of lugworms to pathogens while the plastic itself caused oxidative stress.”
In lugworm guts
Lugworms aren’t a random choice for test subjects. They are found in the United States and Europe, where they comprise up to 32 percent of the mass of organisms living on some shores, and are consumed by birds and fish and used as bait by fishermen.
When the worms feed, they strip the sediment of silt and organic matter, giving rise to a unique and diverse number of species. Consequently, governments use this species to test the safety of chemicals that are discharged in marine habitats.
“They also suffer from mass mortalities during the summer,” Browne says of the worms. “In the areas where a lot of the mortalities occurred, there has been extensive urban development so some mass mortalities could be potentially tied to plastic.
“On a hot summer’s day when the tide is out, these organisms cook slightly because their hydrogen peroxide levels increase. And we found that the plastic itself reduces the capacity of antioxidants to mop up the hydrogen peroxide.”
Although sand transferred larger concentrations of pollutants—up to 250 percent—into the worm’s tissues, pollutants and additives from microplastic accumulated in the gut at concentrations between 326 percent and 3,770 percent greater than those in experimental sediments.
Trouble with triclosan
The pollutant nonylphenol from microplastic or sand suppressed immune function by more than 60 percent. Triclosan from microplastic diminished the ability of worms to engineer sediments and caused mortality, each by more than 55 percent.
Triclosan, an antibacterial additive, has been found in animal studies to alter hormone regulation. Microplastic also increased the worms’ susceptibility to oxidative stress by more than 30 percent.
These chemicals are known as priority pollutants, chemicals that governments around the world have agreed are the most persistently bioaccumulative and toxic. Previous work conducted by Browne and his colleagues showed that about 78 percent of the chemicals recognized by the US Environmental Protection Agency are associated with microplastic pollution.
“We’ve known for a long time now that these types of chemicals transfer into humans from packaged goods,” Browne says. “But for more than 40 years the bit that the scientists and policymakers didn’t have was whether or not these particles of plastic can actually transfer chemicals into wildlife and damage the health of the organism and its ability to sustain biodiversity. That’s what we really nailed with the study.”
Source: UC Santa Barbara