England’s White Cliffs Are Crumbling at an Accelerated Rate

Image credit: 

Diliff via Wikimedia Commons // CC BY-SA 3.0

Buddhist nun Pema Chödrön may have said it best: “Everything—every tree, every blade of grass, all the animals, insects, human beings, buildings, the animate and the inanimate—is always changing, moment to moment.” Though they often change on timescales beyond human perception, geological features are not exempt from the flow of time. In some places, that change happens fast. Geologists say chalk cliffs on England’s southern shore are eroding 10 times faster than they once did. The researchers published their findings in the Proceedings of the National Academy of Sciences.

The picturesque chalk cliffs known as the Seven Sisters swoop gracefully along the British coast, attracting tourists and photographers. Yet for all their serene appearance, the cliffs are not exactly safe.

Chalk is one of the softest rocks and breaks easily—especially when it’s being constantly pounded by the sea. The site saw major landslides in 1999 and 2001, and a massive cliff-fall in May 2016 sent tons of rock into the water below. (“While we would encourage people to enjoy the beautiful coastline of East Sussex,” reads the Seven Sisters Country Park website, “we would remind visitors that you do have a duty of care and responsibility for your own safety.”)

Understanding coastal erosion has become a big issue in a world facing rising sea levels. The tricky part is studying something that, by definition, is no longer there. Even tons of fallen rock will break down and be scattered by the sea.

But the ghosts of the old coastline still haunt the rock that remains. To find them, geologists used a technique called cosmogenic nuclide dating, which measures the buildup of rare isotopes created by cosmic-ray bombardment to determine how long a rock surface has been exposed. This, in turn, can paint a picture of how that rock has moved or been changed over time.

The white cliffs are studded with pieces of hard, chemically inert flint—a rock that makes a far more reliable historical witness than soft chalk. Working perpendicular to the cliffs themselves, the researchers pulled chunks of flint from exposed rock in a line beginning at the cliff and ending near the water’s edge.

They crushed the flint into tiny grains, then ran them through cosmogenic nuclide analysis to determine their age and exposure history.

Next, the researchers fed that data into a mathematical model of the coastline, which allowed them to estimate the cliffs’ rate of erosion going back thousands of years.
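
In spirit, the calculation behind that estimate is straightforward, even if the published model is far more sophisticated. As a rough sketch—with made-up numbers, not the team’s data—a flint sample farther from the cliff has been exposed longer than one at the cliff’s base, and the distance between the two divided by the difference in their exposure ages gives an average retreat rate:

```python
# Rough sketch of the logic behind a cosmogenic erosion estimate.
# The distances and exposure ages below are invented for illustration;
# they are not values from the study.

# (distance seaward of the present cliff face in meters, exposure age in years)
samples = [(0, 500), (50, 2000), (100, 4500), (150, 7000)]

# Average cliff-retreat rate between successive samples, in centimeters per year
for (d1, t1), (d2, t2) in zip(samples, samples[1:]):
    rate_cm_per_yr = (d2 - d1) * 100 / (t2 - t1)
    print(f"{d1}-{d2} m from the cliff: ~{rate_cm_per_yr:.1f} cm/yr")
```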

They found that the coast is indeed crumbling fast—but they also learned that this pace is a relatively recent development. For most of the cliffs’ history, the authors write in their paper, the rate of erosion held steady at between 2 and 6 centimeters per year. But that rate has accelerated mightily in the last few centuries, now cruising along at 22 to 32 centimeters a year.

What changed (or changed more)? The authors can’t say for sure. Natural climate change is one possibility; wave action did become more violent during the so-called Little Ice Age, which took place from the 14th to 19th centuries. The cliffs have also become more vulnerable over the last few centuries, as ocean currents and human engineers picked away at the band of sediment protecting the coast from the ocean’s full force.


November 7, 2016 – 3:01pm

Two Common Antibiotics Don’t Work the Way We Thought

The secret lives of antibiotics are more interesting than we ever knew. Researchers analyzing two commonly prescribed drugs say these medications attack bacteria using never-before-seen techniques—a discovery that could help us develop better drugs in the future. The team published its findings in the Proceedings of the National Academy of Sciences.

Chloramphenicol (CHL) is an aggressive broad-spectrum antibiotic that’s been around since the 1940s. It’s injected intravenously to treat serious infections like meningitis, cholera, plague, and anthrax, but its side effects are so severe that it’s typically reserved as a drug of last resort.

Linezolid (LZD) is both newer and gentler. It’s prescribed for common illnesses like pneumonia and strep, but it has also proven itself against drug-resistant bacteria like MRSA, the cause of hard-to-treat staph infections.

Despite differences in their structure, the two drugs fight disease the same way many other antibiotics do: by sticking to the catalytic center of the bacterial ribosome and blocking its ability to synthesize proteins. Because other ribosome-targeting drugs are universal inhibitors—that is, they prevent any and all protein synthesis—scientists assumed CHL and LZD would be, too.

Researchers at the University of Illinois at Chicago were not content to assume. They wanted to know for sure what the two antibiotics were up to. They cultured colonies of E. coli bacteria, exposed them to strong doses of CHL and LZD, then sequenced the beleaguered bacteria’s genes to see what was going on inside.

As expected, CHL and LZD were all up on the bacteria’s ribosomes, frustrating their attempts to put proteins together. But the drugs weren’t as totalitarian as scientists had believed. Instead, their approach seemed both specific and context-dependent, switching targets based on which amino acids were present.

“These findings indicate that the nascent protein modulates the properties of the ribosomal catalytic center and affects binding of its ligands, including antibiotics,” co-author Nora Vazquez-Laslop said in a statement. In other words: It seems amino acids have a lot more influence than we realized.

As so often happens in science, finding these answers raised plenty of new questions (like “How many other antibiotics have we mischaracterized?”), but it also opened a door for medical science, said co-author Alexander Mankin.

“If you know how these inhibitors work, you can make better drugs and make them better tools for research. You can also use them more efficiently to treat human and animal diseases.”


November 4, 2016 – 2:30pm

For the Flu, It’s the Heat and the Humidity

Image credit: 
iStock

You know the drill: when the winter coat comes out, so do the pocket packs of tissues.* Cold weather and flu season are pretty much synonymous for most of us. Yet there are plenty of areas in the world that never get cold—and the flu still finds them anyway. Now researchers say changes in humidity could help explain why tropical regions still experience outbreaks of seasonal flu. They published their findings in the Proceedings of the National Academy of Sciences.

The flu virus (or viruses, really) is an unfussy traveler and can make itself at home in a number of different climates, but the forces underlying its seasonal cycles have been little understood. Previous studies have shown that both relative and absolute humidity can affect the rate at which droplets travel through the air and thus how fast the flu spreads, while others found that mammals tend to spread the virus faster in cold climates. But all of these studies were performed in laboratories, using guinea pigs and machines. No one could say whether their results would translate to the germ-filled real world.

Figuring that out would require a broad range of expertise, including climate science, epidemiology, preventive medicine, and bioengineering. So researchers at three California institutions formed a sort of interdisciplinary super team, allowing them to pool both their know-how and their relevant data.

The team decided to use a technique called empirical dynamic modeling, or EDM, which is pretty much exactly what it sounds like: It combines real-world data with mathematical modeling to study complex, constantly fluctuating systems like our global climate or the ebb and flow of an ecosystem.
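
The study’s actual analysis is more involved—EDM includes tools for teasing apart which variables drive which—but the core move, rebuilding a system’s behavior from lagged copies of a measured time series and forecasting from nearest neighbors in that reconstructed space, can be sketched briefly. The snippet below is a generic, simplified illustration of that idea (a simplex-projection forecast run on a synthetic, flu-like seasonal series), not the authors’ code or data.

```python
import numpy as np

# Generic sketch of simplex projection, one of the basic tools of empirical
# dynamic modeling. Illustration only, not the study's code: it embeds a single
# time series in lagged coordinates and forecasts each point one step ahead
# from its nearest neighbors in that reconstructed space.

def simplex_forecast_skill(series, E=3, tau=1):
    series = np.asarray(series, dtype=float)
    n = len(series)
    idx = np.arange((E - 1) * tau, n - 1)                 # points we can embed and check
    emb = np.column_stack([series[idx - k * tau] for k in range(E)])
    target = series[idx + 1]                              # the true next values

    preds = np.empty(len(idx))
    for i in range(len(idx)):
        dist = np.linalg.norm(emb - emb[i], axis=1)
        dist[i] = np.inf                                  # leave the point itself out
        nn = np.argsort(dist)[: E + 1]                    # E+1 nearest neighbors
        weights = np.exp(-dist[nn] / max(dist[nn].min(), 1e-12))
        preds[i] = np.sum(weights * target[nn]) / weights.sum()

    return np.corrcoef(preds, target)[0, 1]               # forecast skill (correlation)

# A synthetic "flu-like" series: a yearly cycle (52 weeks) plus noise
rng = np.random.default_rng(0)
weeks = np.arange(520)
flu = np.sin(2 * np.pi * weeks / 52) ** 2 + 0.1 * rng.standard_normal(len(weeks))
print("one-step forecast skill:", round(simplex_forecast_skill(flu), 2))
```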

Their first dataset came from the World Health Organization’s Global Health Atlas: all worldwide records of laboratory-confirmed influenza A or B diagnoses from 1996 to 2014. Next they turned to the National Oceanic and Atmospheric Administration’s Global Surface Summary of the Day, which provided week-by-week records of temperature and absolute humidity for the same time period.

By feeding these data into an EDM representation of the planet, the team was able to get a zoomed-out perspective on the interplay between weather and the spread of disease. They found that it was not temperature alone that drove flu outbreaks, nor humidity—it was the combination of the two. In cold climates, the virus thrives in dry air. But when temperatures rise, the flu luxuriates in damp, humid conditions like those in the tropics.

“The analysis allowed us to see what environmental factors were driving influenza,” Scripps Institution of Oceanography’s George Sugihara, a co-author of the study, said in a press statement. “We found that it wasn’t one factor by itself, but temperature and humidity together.”

These findings could have real implications in the global fight against the flu, the authors write. They suggest that setting up humidifiers in cold, dry places and dehumidifiers in the tropics could create environments so unfriendly to viruses that even the flu can’t stick around.

*Please consider this your reminder to get your flu shot.


November 4, 2016 – 9:30am

A Spectacular Supermoon Is on the Way

Image credit: 

NASA/Bill Ingalls // Public Domain

If you’re planning some kind of outdoor nocturnal mischief this November, we recommend you avoid the 14th, when a gargantuan supermoon will light up the night sky. If you’re not, we recommend that you get outside and enjoy it.

Supermoons aren’t rare—we typically get between four and six each year—but this month’s is extra special for a few reasons. It’s a full moon, for one, and will be orbiting extraordinarily close to Earth. We haven’t been this close to our lunar satellite since January 1948, and we won’t be again until November 2034.

The Moon’s proximity to Earth and its position relative to the Sun will create a jaw-dropping spectacle, with the Moon appearing up to 14 percent bigger and 30 percent brighter than a full moon at apogee, its farthest point from Earth.
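
Those two percentages follow from simple geometry: the Moon’s apparent diameter scales with the inverse of its distance, and its brightness with the inverse square. A quick back-of-the-envelope check using rough perigee and apogee distances (the exact figures vary from orbit to orbit) reproduces them:

```python
# Back-of-the-envelope check of the "14 percent bigger, 30 percent brighter"
# comparison between a perigee full moon and an apogee full moon.
# Distances are approximate, in kilometers; exact values change every orbit.
perigee = 356_500   # roughly the Moon's closest approach to Earth
apogee = 406_700    # roughly its farthest point

size_ratio = apogee / perigee        # apparent diameter scales as 1/distance
brightness_ratio = size_ratio ** 2   # brightness scales as 1/distance^2

print(f"~{(size_ratio - 1) * 100:.0f}% bigger")          # prints ~14% bigger
print(f"~{(brightness_ratio - 1) * 100:.0f}% brighter")  # prints ~30% brighter
```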

If you’re the early-to-bed type or expect to be stuck inside that evening, don’t fret: the supermoon’s peak will actually occur during morning rush hour at 8:52 a.m. EST. Plus, we’ll remind you the day before.

And while you’ve got your calendar out, take note: we’ll get another, albeit lesser, supermoon again on December 14.


November 3, 2016 – 5:30pm

The Science of Earworms (Lady Gaga, We’re Looking at You)

Image credit: 
iStock

You didn’t plan to have Katy Perry stuck in your head all day. It just happened, and now you’re a prisoner in your own treacherous, pop music–blasting mind. Never fear: We have answers. A study published today in the journal Psychology of Aesthetics, Creativity, and the Arts [PDF] identifies the features that transform certain songs into earworms—and even offers tips for their extraction.

Scientists call this experience involuntary musical imagery, or INMI. Previous studies have suggested certain traits [PDF] that make a song ideal INMI fodder. First, it’s familiar; songs we’ve heard many times before are the ones most likely to lodge in our brains. Second, it’s singable. So far, that’s really all we know. But researchers remain on the case.

In 2012, researchers in Finland and the UK conducted simultaneous surveys inviting their compatriots to complain about the songs that haunted them the most. The UK survey, called The Earwormery, amassed responses from 5,989 disgruntled Brits. It was conducted by researchers from Goldsmiths, University of London, four of whom are co-authors on the current study.

For the current study, they pulled the responses of 3,000 of those respondents and analyzed them for trends. They then identified 100 of the worst offenders and scored each one on 83 different musical parameters, including length, melody, pitch range, and commercial success.

The songs most commonly found wiggling around in British brains had quite a few things in common. They were typically pretty fast pop songs, and their melodies were fairly generic, yet each one had a little something, like an unusual tonal interval or a repetition, that set it apart from others on the charts and made it stickier.

The top 9 list of wormiest tracks revealed a couple of other trends. See if you can spot them here:

1. “Bad Romance,” Lady Gaga

2. “Can’t Get You Out of My Head,” Kylie Minogue

3. “Don’t Stop Believin’,” Journey

4. “Somebody That I Used to Know,” Gotye

5. “Moves Like Jagger,” Maroon 5

6. “California Gurls,” Katy Perry

7. “Bohemian Rhapsody,” Queen

8. “Alejandro,” Lady Gaga

9. “Poker Face,” Lady Gaga

Only one of those artists is even British—and three of them are Lady Gaga.

These results are specific to UK survey respondents, as are the musical qualities that inspired them. It’s probable that stickiness is cultural; what’s sticky in Mozambique may go in one ear and out the other in Japan, and vice versa.

The researchers say their research could be beneficial for those in music-related industries. “You can, to some extent, predict which songs are going to get stuck in people’s heads based on the song’s melodic content,” lead author Kelly Jakubowski, a music psychologist at Goldsmiths, University of London, said in a statement. “This could help aspiring songwriters or advertisers write a jingle everyone will remember for days or months afterwards.”

Still, we’re not completely helpless. The researchers offer three tips for extracting an earworm. First, just give in. Listening to the song the entire way through can help get it out of your head. Second, find a musical antidote. The British survey respondents listed “God Save the Queen” as the best way to shake an earworm, but we’d like to recommend James Brown’s “Sex Machine.” (Trust us. It works.)

Finally, stop worrying about it. Like a little splinter or an errant eyelash, that Lady Gaga song will likely work its way out all on its own.


November 3, 2016 – 12:01pm

We Eat a Lot More When We’re Tired

Image credit: 
iStock

Love it or hate it, sleep is an essential (and substantial) part of your life. When we don’t get enough rest, we start to break down—and so do our eating habits. A new meta-analysis [PDF] published in the European Journal of Clinical Nutrition found that sleep-deprived people ate hundreds more calories per day than they did when they were well-rested.

Researchers at King’s College London pulled data from 11 different sleep and eating studies on a total of 172 people. All of the studies involved an experimental group, in which people were kept awake for part of the night, and a control group, whose participants were allowed to get the sleep they needed. The participants’ energy intake—that is, how much they ate—and output (any physical exertion) were then tracked for the next 24 hours.

Unsurprisingly, sleep-deprived people did not exercise more than the well-rested. But they did eat more—an average of 385 calories a day above their typical intake. They weren’t just any calories, either; participants specifically sought out foods high in fat and protein. Their carbohydrate intake did not change.

What was behind these snoozy munchies? The research team can’t say for sure. Previous studies point to two potential culprits: our brains and our hormones. One 2013 report found that the brains of sleep-deprived people responded more urgently to pictures of fattening food, inspiring cravings even when the participants were full. And even as their snack-lust peaked, the participants experienced a drop in activity in the region of the brain associated with careful decision-making. They really didn’t stand a chance.

Other experiments have found that sleep deprivation can lead to an imbalance in the so-called hunger hormones leptin and ghrelin, which can trick the body into believing that it’s starving.

The takeaway from the latest study, say its authors, is that weight gain is complicated. Diet and exercise are crucial factors, but they don’t operate in a vacuum.

“Reduced sleep is one of the most common and potentially modifiable health risks in today’s society in which chronic sleep loss is becoming more common,” senior author Gerda Pot said in a statement. “More research is needed to investigate the importance of long-term, partial sleep deprivation as a risk factor for obesity and whether sleep extension could play a role in obesity prevention.”


November 2, 2016 – 5:00pm

Coral and Algae Have Been Friends for 212 Million Years

Image credit: 
Nick Hobgood via Wikimedia Commons // CC BY-SA 3.0

Some classics were just made to go together. Peanut butter and chocolate. Thanksgiving dinner and stretch pants. Scleractinian corals and dinoflagellate algae. And boy, do those two go way back—scientists examining fossils say the pair have been cohabiting for at least 212 million years, since the Late Triassic period. The researchers published their report today in the journal Science Advances.

Happy, healthy coral is essential for a happy, healthy reef. To stay happy and healthy, many modern corals have forged super close relationships with teeny algae called zooxanthellae. The corals give the algae a safe place to live and the chemical components for photosynthesis, while the algae make oxygen, keep the water clean, and produce all kinds of helpful nutrients. The pair really have a good thing going.

But just how long it’s been going on has been anybody’s guess. Previous studies on the pair’s relationship have been largely speculative, using data from modern-day corals to imagine their ancestors’ world.

Now, two new scientific techniques, one visual and one chemical, have allowed us to get a far more accurate picture of coral history.

Earlier this year, lead author Katarzyna Frankowiak and a number of her co-authors reported that they’d figured out how to tell whether a fossilized hard coral had been in a relationship with algae. The trick, they said, is to look very closely at the coral’s skeleton to see how it had grown and aged. Even when the algae itself was long gone, its presence had left indelible (if microscopic) marks behind.

For the new study, the researchers applied this technique to tiny samples of fossilized hard corals found near the former Tethys Sea in Turkey. They used a variety of high-powered microscopes to examine the fossils in the most minute of detail and found that the skeletons of these ancient, ancient samples looked a lot like those of modern symbiotic hard corals.

Algae activity (brown dots in the tissue, upper left image) is recorded in the coral skeleton as structural (growth bands; upper right image) and geochemical signatures. Such regular growth bands occur in Upper Triassic (ca. 220 Ma) scleractinian corals (lower images) as well. Image Credit: Isabelle Domart-Coulon (upper left), Jarosław Stolarski (upper right, and lower images)

The second new method concerned the corals’ chemical composition. The experience of living with algae alters a coral’s very molecules, changing the ratio of various oxygen, carbon, and nitrogen isotopes. And just as with the visual inspection, analysis of the fossil corals’ isotopes suggested that they’d been sharing their lives with zooxanthellae.

Analyzing the coral isotopes yielded another insight: the sea in which these buddies lived was likely poor in nutrients. The fossil corals shared a similar ratio of nitrogen isotopes with modern symbiotic Bermuda corals, which are currently struggling in nutrient-starved waters. It’s possible, the researchers say, that these difficult conditions were what inspired the algae and the corals to band together in the first place.


November 2, 2016 – 2:30pm

How Chipmunks and Striped Mice Got Their Adorable Stripes

Image credit: 

Oleksii Voronin via Wikimedia Commons // CC BY-SA 3.0

Scientists say they’ve found the genetic origin of stripes in chipmunks and other mice. They published their findings today in the journal Nature.

Cute though they may be, rodents’ stripes are hardly ornamental. Like a jaguar’s rosettes or a peppered moth’s sooty wings, stripes evolved to allow their bearers to vanish into their surroundings. On a large scale, we understand how these patterns came about: animals with camouflage markings survived and bred, while those without died out. On a smaller scale, we’ve still got a lot to learn.

To zoom in on the specific genetics of mammal stripes, an international team of scientists decided to take a very close look at the four-striped grass mouse (Rhabdomys pumilio), a resilient little rodent that spends its days munching seeds in southern Africa.

Image Credit: J. F. Broekhuis

The scientists first examined the individual hairs that form each mouse’s stripes. They found three distinct types: light hairs, with black bases and unpigmented hair shafts; black hairs, which were dark from base to tip; and banded hairs, with black bases and yellow shafts. All three hair types were found in both dark and light stripes, albeit in different proportions: dark stripes simply had a lot more black hairs, while light stripes were mostly light hairs.

Next, they bred baby grass mice in the lab, tracking the appearance of their skin and fur as they grew from embryos to pups. They found that just 19 days after fertilization, the length of the rodents’ fur began to vary over the areas that would one day be striped. Three days later, the embryos’ skin started to lighten in the same places that light-striped fur would later appear. At birth, the mouse pups’ coats showed variation in both hair length and skin color. Two days after that, their characteristic stripes were clearly visible.

To understand what was causing these shifts, the researchers scanned the rodents’ genomes at all four points in development. They found that, as early as day 19 of embryonic development, a gene called ALX3 was highly active in the skin of the embryos’ backs in the same sites where the light stripes would one day appear.

The researchers learned that ALX3 was kind of a bully to a pigment cell–producing protein called microphthalmia-associated transcription factor (MITF). Wherever ALX3 was active, MITF was suppressed and pigment production dropped, leading to very pale cells, which in turn led to light stripes.

Furthermore, the team found that the same mechanism—ALX3 smothering MITF activity—appears in similarly striped Eastern chipmunks (Tamias striatus). While mice and chipmunks are both rodents, their last common ancestor lived around 70 million years ago. The fact that two such distinct species share a similar stripe backstory suggests to the researchers that this useful genetic trick may have evolved a number of times across the mammal family tree—a phenomenon known as convergent evolution.


November 2, 2016 – 2:15pm

How One of the Victorian Era’s Most Famous Actors Became Bram Stoker’s Dracula

Bram Stoker (L) and Sir Henry Irving (R). Image Credits: Unidentified photographer, public domain; Lock and Whitfield, public domain

 
The role of Count Dracula was not one that Henry Irving wanted. More than a century ago, the actor refused the part in a staged reading of Bram Stoker’s exciting new novel, released in 1897. Yet Irving would never entirely shake the specter of the intense, sensual vampire—a character that scholars say he himself inspired.

Abraham “Bram” Stoker grew up in Ireland in the mid-1800s. A sickly child, he spent many days and nights in bed while his mother Charlotte filled his ears with tales of monsters and ghouls, disease and death. But Stoker grew healthier as he got older, and by the time he left home for university, he was a hale, red-haired giant. Bram had become a jock, but a well-read jock, exchanging doting and passionate letters with his idol Walt Whitman.

After college, Bram followed in his father’s footsteps and entered civil service. He might have stayed there, too, were it not for the lure of the theater. So eager was Stoker to immerse himself in Dublin’s dramatic scene that he began volunteering at night as a theater critic for the Dublin Evening Mail—despite the fact that the paper already had paid staff writing reviews.

It was in his capacity as a critic that Stoker first encountered Henry Irving in 1877. The actor was playing the lead role in Hamlet—a well-worn part by any measure, yet Stoker felt that Irving brought a depth and freshness to the performance that had never been seen before.

Henry Irving as Hamlet, from a painting by Sir Edwin Long. Image Credit: Public Domain

 
Stoker was instantly enchanted. He returned to see a second performance, and then a third, writing a new review each time. Intrigued by the attention, Irving invited an ecstatic Stoker to a dinner party.

An after-meal recitation by Irving cemented the night in Stoker’s mind forever. Even in a dining room the imposing actor commanded his audience with almost mesmeric power. “Outwardly I was as of stone …” Stoker wrote years later in his book Personal Reminiscences of Henry Irving. “The whole thing was new, re-created by a force of passion which was like a new power.” When the poem concluded, Stoker “burst out into something like a fit of hysterics.”

That night, he wrote, “began the close friendship between us which only terminated with his life—if indeed friendship, like any other form of love, can ever terminate.”

Irving was flattered by the younger man’s avid attention and enjoyed his company. The two began spending more and more time together, sometimes talking until sunrise. Irving offered Stoker a job as his business manager. Stoker quit his office job (much to his parents’ chagrin) and gave himself over to a life in the theater.

It was a good fit: Stoker was a thoroughly educated man and a gifted manager with a head for figures. Irving’s theater, the Lyceum, blossomed under Stoker’s careful and devoted attention. Yet despite his talents and hard work, which kept him away from his wife and child for days, even months at a time (Bram married Florence Balcombe in 1878; the two welcomed their son Irving—ahem—one year later), Stoker never sought attention or acclaim.

Even if he had, he likely would not have had much luck. Someone once asked Irving if he had a college degree. “No,” he drawled, “but I have a secretary who has two.” The “secretary” he spoke of so dismissively was Stoker.

This seemingly symbiotic relationship—Irving as vainglorious master, Stoker the humble servant—went on for decades. “Being anywhere with Irving was contentment for Stoker,” historian Barbara Belford wrote in her 1996 book Bram Stoker: A Biography of the Author of Dracula.

Irving in character. Image Credit: Public Domain

But trouble comes to us all, even the happiest pairs. Stoker had continued to write, scribbling on scraps of paper in the scarce moments he wasn’t working or spending with Irving. (The relationships between Stoker and his wife, and between Irving and his, had long since grown cold). In 1897, those scraps became a book.

Dracula told the story of a naïve young middle-class man held prisoner by a powerful, sensual count.

“His face was a strong—a very strong—aquiline,” protagonist Jonathan Harker wrote in his fictional journal, “with high bridge of the thin nose and peculiarly arched nostrils, with lofty domed forehead, and hair growing scantily round the temples but profusely elsewhere. His eyebrows were very massive, almost meeting over the nose, and with bushy hair that seemed to curl in its own profusion.”

As Harker came to learn, the vampire Count Dracula would never see his own reflection. But Irving might have. “Somewhere in [Stoker’s] creative process,” Belford writes, “Dracula became a sinister caricature of Irving as mesmerist and depleter, an artist draining those about him to feed his ego. It was a stunning but avenging tribute.”

Irving may have been the most obvious, immediate inspiration for Stoker’s count, but he was not the only one. Many elements of Dracula’s past were lifted wholesale from history and legends surrounding Vlad the Impaler. Some scholars argue the dramatic, articulate count represented a monstrous version of Stoker’s sometimes-friend Oscar Wilde, whose public trial and shunning took place just one year before the novel was written. And there may have been a variety of other inspirations for Stoker’s tale. Yet Belford, and other scholars, believe much of Dracula’s looks and character were based on Irving [PDF].

To protect the theatrical rights to his novel, Stoker quickly shaped it into a script and organized a staged reading at the Lyceum, offering the lead role to the theater’s leading man—by then one of the most famous actors of the Victorian era. Irving turned it down. Instead, he watched dolefully from the audience as someone else brought the vampire to life. The reading ended. Irving retreated.

A nervous Stoker found the actor in his dressing room. “How did you like it?” he asked.

“Dreadful,” Irving said.

Two years later, Irving sold the Lyceum out from under Stoker’s nose.

Six years after that, Irving died. But Stoker never forgot their fateful first meeting the night of the dinner party. “So great was the magnetism of his genius, so profound was the sense of his dominancy,” Stoker wrote, “that I sat spellbound.”


October 31, 2016 – 12:30pm

Is Daylight Saving Time to Blame for Seasonal Depression?

Image credit: 
iStock

The precise root cause of seasonal depression has eluded scientists for years. Now researchers say they’ve identified one contributing factor: the transition out of daylight saving time. They published their report in the journal Epidemiology.

Seasonal depression, or seasonal affective disorder (SAD), has symptoms that mirror those of generalized depression; what differentiates SAD is the timing of its onset, which coincides with winter’s shorter days and long, dark nights. The clock change itself is no small matter, either: daylight saving transitions affect around 1.6 billion people across the globe.

We know that sunlight, or the absence of it, has a powerful effect on our bodies. But scientists have yet to find a definitive physiological link between darkness and SAD, a fact that makes some wonder if there aren’t other variables at play.

Previous studies have found a relationship between the shift from daylight saving into standard time and other health problems, but they had not looked specifically at the transition’s effect on depression. To get a better idea, an international team of researchers looked at Danish hospital intake records from 1995 to 2012, including 185,419 diagnoses of depression.

As expected, they saw an increase in hospital admissions for depression as winter descended. But that increase spiked at one particular time: the month immediately following the changing of clocks.

The researchers controlled for variables like day length and weather, which they say indicates that the 8 percent rise in depression diagnoses after the transition was no coincidence.

And while their study focused on people with severe depression, the authors say the time shift likely affects “the entire spectrum of severity.”

Though the study did not identify the mechanism responsible for time change–related depression, the researchers believe it may have something to do with the way daylight saving manipulates our hours of light and dark. Danish daylight saving protocol steals an hour of daylight from the afternoon and moves it to the early morning—a time, the authors say, when most people are indoors anyway.

“We probably benefit less from the daylight in the morning between seven and eight, because many of us are either in the shower, eating breakfast or sitting in a car or bus on the way to work or school. When we get home and have spare time in the afternoon, it is already dark,” co-author Søren D. Østergaard of Aarhus University Hospital said in a statement.

Then there are the psychological effects. In changing the clocks, we are forced to acknowledge the arrival of months of darkness, a realization that Østergaard says “is likely to be associated with a negative psychological effect.”

Fortunately, while we still don’t fully understand the causes of SAD, we have found effective treatments. If you find yourself depressed as the year winds down, talk to your doctor and look into a therapeutic light box.



October 31, 2016 – 10:30am