A technology called functional near-infrared spectroscopy (fNIRS) can be used to objectively measure tinnitus, or ringing in the ears, according to a new study published November 18 in the open-access journal PLoS ONE by Mehrnaz Shoushtarian of The Bionics Institute, Australia, and colleagues. A summary of the study was published on the Science Daily website.
Tinnitus, the perception of a high-pitched ringing or buzzing in the ears, affects up to 20% of adults and, when severe, is associated with depression, cognitive dysfunction, and stress. Despite its wide prevalence, there has been no clinically used, objective way to determine the presence or severity of tinnitus.
In the new study, researchers turned to fNIRS, a non-invasive and non-radioactive imaging method which measures changes in blood oxygen levels within brain tissue. The team used fNIRS to track activity in areas of the brain’s cortex previously linked to tinnitus. They collected fNIRS data in the resting state and in response to auditory and visual stimuli in 25 people with chronic tinnitus and 21 controls matched for age and hearing loss. Participants also rated the severity of their tinnitus using the Tinnitus Handicap Inventory.
fNIRS revealed a statistically significant difference in connectivity between areas of the brain in people with and without tinnitus. Moreover, the brain’s response to both visual and auditory stimuli was dampened among patients with tinnitus. When a machine learning approach was applied to the data, a program could differentiate patients with slight/mild tinnitus from those with moderate/severe tinnitus with 87.32% accuracy. The authors conclude that fNIRS may be a feasible way to objectively assess tinnitus, evaluate new treatments, or monitor the effectiveness of a patient’s treatment program.
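The paper's analysis pipeline is not reproduced here, but the classification step it describes can be sketched in outline: features derived from the fNIRS recordings (connectivity values and evoked-response measures) feed a standard supervised classifier that predicts the severity category. The file names, feature set, and choice of a support-vector machine below are illustrative assumptions, not the authors' actual code.

```python
# Illustrative sketch only: classify tinnitus severity from fNIRS-derived features.
# Feature files, feature set, and model choice are assumptions for illustration,
# not the pipeline used in the study.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X = np.load("fnirs_features.npy")    # one row per participant: connectivity + evoked responses (hypothetical file)
y = np.load("severity_labels.npy")   # 0 = slight/mild, 1 = moderate/severe, from THI scores (hypothetical file)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)          # cross-validated classification accuracy
print(f"Mean cross-validated accuracy: {scores.mean():.2%}")
```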
The authors add: “Much like the sensation itself, how severe an individual’s tinnitus is has previously only been known to the person experiencing the condition. We have combined machine learning and non-invasive brain imaging to quantify the severity of tinnitus. Our ability to track the complex changes that tinnitus triggers in a sufferer’s brain is critical for the development of new treatments.”
Newly formed cochlear hair cells contain intricate hair bundles with many stereocilia (critical for sensing sound) and other components that are critical for proper function and neural transmission. Credit: Will McLean
Frequency Therapeutics of Woburn, Mass, reports that an approach to regenerating inner ear sensory hair cells lays the groundwork for treating chronic noise-induced hearing loss. The company and its co-founders are drawing on research from Brigham and Women’s Hospital (BWH), Harvard Medical School, Mass Eye and Ear Infirmary, and the Massachusetts Institute of Technology (MIT). In the February 21, 2017 edition of Cell Reports, the scientists describe a technique to grow large quantities of inner ear progenitor cells that convert into hair cells. The same techniques are also said to regenerate hair cells in the cochlea.
Hearing loss affects 360 million people worldwide according to the World Health Organization (WHO). Inner ear hair cells are responsible for detecting sound and helping to signal it to the brain. Loud sounds and toxic drugs can lead to death of the hair cells, which do not regenerate. Humans are born with only 15,000 sensory hair cells in each cochlea, which are susceptible to damage from exposure to loud noises and medications—leading to cell death and hearing loss over time.
According to a press release from Frequency Therapeutics, sufficient numbers of mammalian cochlear hair cells have not been able to be obtained to facilitate the development of therapeutic approaches for hearing loss. The new research built on previous work to control the growth of intestinal stem cells expressing the protein Lgr5 and targeted a different population of Lgr5 cells that were discovered to be the source of sensory hair cells in the cochlea during development (a subset of supporting cells or progenitors). The team successfully identified a protocol of small molecules to efficiently grow the cochlear progenitor cells into large colonies with a high capacity for differentiation into bona fide hair cells.
Jeff Karp, PhD
“The ability to regenerate hair cells within the inner ear already exists in nature,” said Jeff Karp, PhD, of BWH and Harvard Medical School in the press release. “Birds and amphibians are able to regenerate these cells throughout their lives, which provided the base for our inspiration to find similar pathways in mammals. With our collaborators at Mass Eye and Ear Infirmary, we were able to study a small molecule approach, that we developed at MIT and BWH, to expand progenitor cells from the mouse cochlea. We believe this technique represents a major advance for hearing loss research and will enable new physiological studies as well as genetic screens using drugs, siRNA, or gene overexpression.”
The research team first focused on optimizing the expansion of Lgr5 expressing cochlear progenitor cells. With the combination of a GSK3 inhibitor to activate the Wnt signaling pathway and a histone deacetylase (HDAC) inhibitor to activate gene transcription, the research team achieved a greater than 2000-fold expansion of cochlear supporting cells compared to previous approaches. This protocol was used successfully and with consistency to generate colonies of neonatal and adult murine cells, as well as primate and human progenitor cells. Furthermore, according to the researchers, the team achieved 60-fold enhancement of hair cell production from the progenitor cells compared to current methods.
The generation of new hair cells was achieved even in cochlear tissue that had been depleted of hair cells by exposure to an ototoxic antibiotic. Importantly, hair cells produced from the protocols exhibited the same physical features, gene expression, and functionality as typical cochlear hair cells, says Frequency Therapeutics.
“This work has opened an entire field of what we call Progenitor Cell Activation (PCA), which we believe has many regenerative applications beyond hearing loss, ranging from skin-related diseases and ocular ailments to gastrointestinal diseases and diabetes,” said Will McLean, PhD, co-founder and VP, Biology and Regenerative Medicine, at Frequency Therapeutics, and the lead author of the paper. “Furthermore, the approach creates a platform with potential to explore large populations of previously difficult-to-access progenitor cell types. Drug discovery for the inner ear was limited by the inability to acquire enough primary cells to explore drug targets. This approach unlocks that ability for hearing research and a variety of other fields.”
“By using Progenitor Cell Activation to restore healthy tissue within the inner ear, we’re harnessing the body’s innate ability to heal itself,” said David Lucchino, co-founder, president and CEO of Frequency Therapeutics. “Frequency’s development of a disease modifying therapeutic that can be administered with a simple injection could have a profound effect on chronic noise-induced hearing loss, our lead indication, and we are rapidly advancing this program into human clinical trials within the next 18 months,” added Chris Loose, PhD, co-founder and CSO of the company.
Frequency Therapeutics was founded to translate what the company describes as breakthrough work in Progenitor Cell Activation (PCA) by its scientific founders, Robert Langer, ScD, and Jeff Karp, PhD, into new treatments where controlled tissue regeneration with locally delivered drugs could have profound therapeutic potential. The company has licensed foundational patents from MIT and Partners HealthCare.
Hearing Review has published several articles on work involving Lgr5, including work involving a co-author of this study, Albert Edge, PhD, and related work on blocking the Notch pathway.
Sources: Frequency Therapeutics; Brigham and Women’s Hospital; Cell Reports
Zoom Charges Monthly Fee for Closed Captioning During Pandemic, ‘WBFO’ Reports
Hearing impaired people working remotely and using video conferencing services during the coronavirus pandemic can find communication difficult. According to an article on the WBFO/NPR website, hearing advocate and Living With Hearing Loss founder Shari Eberts recently wrote an open letter—that turned into a petition with 58,000 signatures—asking video conferencing companies to remove the paywall from their captioning services.
According to the article, both Google and Microsoft have complied, but Zoom is still charging a $200 monthly fee for users to be able to access closed captioning.
Issues with video conferencing, including poor audio quality and spotty internet connections, can make lip reading difficult. Even when using workarounds like speaker mode to see a larger version of the person they’re speaking with, or headphones to improve sound quality, a person’s lips can be out of sync with their words, Eberts says in the article. Closed captions could improve communication in these situations, she says.
“It’s hard for us to want to jump in or to share our thoughts because we’re not sure what’s been said. And obviously, there’s a lot of trepidation about looking silly or repeating something that someone just said,” Eberts is quoted in the article as saying.
Source: WBFO
Whisper may have a quiet name, but it could reverberate loudly in the hearing healthcare industry. The company launched its first new hearing aid on October 15—a product that really is significantly different from all others dispensed by audiologists and hearing aid specialists. And, yes, that’s right: the Whisper Hearing System is designed for dispensing by hearing care professionals. As such, Whisper represents the first new major hearing aid manufacturer with a product specifically designed for dispensing since the InSound Medical XT was approved by the FDA in 2003 (later purchased in 2010 by Sonova and renamed Lyric).
The Whisper RIC hearing aids and brain.
And a bit like Lyric, Whisper will use a subscription payment model for consumers. The leasing concept is gaining ground in hearing healthcare, in part due to the fact that technology moves so fast, hearing aids can be expensive, and frequent product upgrades are now a given in the industry. Whisper will be available via a comprehensive monthly plan that includes ongoing care from a local hearing care professional, a lease of the Whisper Hearing System, regular software upgrades, and a 3-year warranty that not only covers the system itself but also loss and damage. The company is offering a special introductory rate of $139/month (regularly $179/month) for a 3-year term.
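As a rough back-of-the-envelope comparison (assuming the 3-year term simply means 36 equal monthly payments, which the announcement does not spell out), the quoted rates work out as follows:

```python
# Rough lease-cost comparison using the rates quoted above.
# Assumption: the 3-year term is 36 equal monthly payments; no other fees are modeled.
INTRO_RATE = 139      # USD per month, introductory offer
REGULAR_RATE = 179    # USD per month, regular rate
TERM_MONTHS = 36      # 3-year term

print(f"Introductory-rate total over the term: ${INTRO_RATE * TERM_MONTHS:,}")    # $5,004
print(f"Regular-rate total over the term: ${REGULAR_RATE * TERM_MONTHS:,}")       # $6,444
```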
The New Whisper Hearing System
The Whisper Hearing System essentially has three components:
A hearing aid processor that resembles an advanced receiver-in-the-canal (RIC) hearing aid;
The Whisper Brain, a small device that runs an AI-driven Sound Separation Engine to optimize sound in real time and also enables connectivity to iPhones; and
A phone app that provides an interface for the consumer.
The Whisper team, which is largely composed of executives from the AI field, created the Whisper Brain as a dedicated, powerful sound processing system that also allows for updates and other capabilities—instead of relying on the wearer’s smartphone for many of these functions. “We developed the Whisper Brain to run the core technology we’ve developed for hearing,” said company Co-founder and President Andrew Song in an interview with Hearing Review. “Think about your smartphone and all the processing inside it. We’re using the Whisper Brain to apply this type of processing to hearing without having to compete with smartphone games or applications. The Whisper Brain is a dedicated processor designed to provide the best hearing.”
However, the Whisper Brain isn’t required to use the hearing aid, as there may be situations where the wearer wants to step away from it or not take it with them. In those situations, the hearing aid uses the “onboard” hearing aid algorithms in the RIC (similar to other advanced hearing aids when unpaired to the user’s cell phone).
Wireless connectivity with iPhones is also provided through the Whisper Brain via Bluetooth, and the company says it may support other phones and has plans to expand on this in the future. The RICs use a size 675 battery with an expected life of 4-5 days under typical use, including streaming, and the Whisper Brain has a USB port for recharging.
Not Your Grandfather’s Hearing Aid
Andrew Song
According to Song, Whisper started about 3 years ago in San Francisco when he began discussions with another Whisper co-founder, Dwight Crow, the company’s CEO. Song is the former head of products for an online instant-messaging (IM) system most of us are familiar with: Facebook Messenger Core. A mathematics and computer science graduate of the University of Waterloo, he is an expert in artificial intelligence and a member of Sequoia Capital’s Scout Program which was formed to discover and develop promising companies. Crow is the founder of Carsabi, a machine-learning based car sales aggregator acquired by Facebook in 2012, and he helped build the e-commerce segment at Facebook which yields over $1 billion per quarter in revenue. A third co-founder, Shlomo Zippel, was the applications team leader at PrimeSense which built the 3D sensor technology behind Microsoft Kinect.
Jim Kothe
The company then added Jim Kothe as head of sales, an audiologist and hearing industry veteran with a wealth of experience in both the dispensing community and manufacturing, along with an impressive team of executives who have held leadership roles at companies like Facebook, Nest, Google, Invisalign, Johnson & Johnson, Solta Medical, and Cutera. Together they are collaborating on a product that blends artificial intelligence, hearing care, hardware, and software to help solve the challenge of providing better hearing.
“I think for me, and probably for everyone at the company, it’s a very personal mission,” says Song. “Personally, the starting point is really my grandfather. He has hearing loss and is not an uncommon story when you work in this business: I’d say that he’s a hearing aid owner, but not a hearing aid wearer.”
This set into motion Song’s investigation into what hearing aid technology was doing, what experiences people were having with it, and why his grandfather had the complaints he did. “That really opened my world to all the exciting things that could be done, but also the opportunity we have for how we can really build a product to help [people like him],” says Song. “Since then we’ve been putting the product together and bringing the expertise that comes from hearing folks like Jim and the others on our team—and blending it with the kind of product and technology ideas we almost take for granted here in Silicon Valley. Products are becoming more consumer friendly, more consumer oriented, and we’re building some of those ideas into a new type of hearing aid product. So, while Whisper is a hearing aid regulated by the FDA, all of these things influenced our approach, our mentality, and our vision towards this space, and we think our approach is a little different [from those of other hearing aid manufacturers].”
The larger capacity for processing power is extremely exciting for Song and his colleagues, and he likens this advancement to the leap from analog to digital hearing technology. He says some great hearing aid algorithms have been, and will continue to be, created that will result in substantially improved hearing. However, there’s little point in having these algorithms if they can’t be fully employed in a wearable device.
He also says the problem in hearing aids is much more complex than, for example, the problem addressed by noise-cancelling headphones. “Over time, [we’ve had] very ambitious people with a lot of ideas on what we should do with this powerful processing. What’s really exciting is not just having this technology, but also having a learning platform to be able to develop it. I think one of the most interesting parts of development is that the goal, at the end of the day, really isn’t about perfect noise removal. You need noise in your life. We have demos we can run that more or less perfectly remove noise…and it just creates sort of a weird environment. So, I think in many cases, the unique aspect of what we’re doing revolves around how do we use [the research] and how do we invent some truly novel ideas? Obviously, it’s not only about noise removal, but how we can use the powerful processing specifically in these hearing aids to make hearing aids really good for the purpose of listening. That subtlety is where we feel like we can really differentiate ourselves and truly make a difference in people’s lives.”
A System that Relies on Professional Care
Song says there has been a patient-centric approach at every turn in the design, development, marketing, and especially distribution of the Whisper Hearing System. And it starts with the hearing care professional’s expertise.
“I think there’s several very important things along that path; the first of which was to work with hearing care professionals who are the ‘artists’ in delivering great care,” Song told HR via a Zoom interview. “If I look at my grandfather’s experience, it was pretty obvious to me that having the right professionals made a huge difference. And so you can talk about using Zoom or you can talk about going direct to consumer, but it’s very, very obvious—even as a Silicon Valley engineer—that the audiologist is extremely important in the process. That’s why we made a decision very early on that we’d be working with professionals. And if you remember, when the company started in 2017, that’s when the OTC laws were getting passed. That’s where all the ‘cool stuff’ was supposed to be. Everyone was saying, ‘Get rid of these professionals!’ …But there’s a care-oriented mindset in hearing healthcare. You can see that there’s a personal aspect [needed] to evaluate what would be good for my grandfather. And when you talk to patients and you talk to audiologists, this becomes very clear. So, I think that was a very early decision that’s not necessarily about the product, per se, but about our business and how we best deliver the hearing system.”
One of the things Whisper also wants to address is the post-purchase feeling of regret that can accompany a high-end, high-technology purchase. As with any car, computer, or consumer electronics device, when a consumer purchases an expensive top-of-the-line hearing aid, there is doubtlessly a more advanced model with new processing capabilities and features that will be launched 6 months later. But, with hearing loss, Song believes that sense of regret can be magnified because hearing is such a personal, important 24/7 activity.
The Whisper Hearing Aid Brain
That led to the idea of a subscription-based system using a machine-learning platform that can be upgraded on regular intervals without continually replacing the actual hearing aid or brain itself. “The nature of our product is that it gets better over time. You don’t need to pay for [the upgrades]; the hearing aid learns on its own, and we’ll also deliver you a software upgrade every few months. [It’s] similar to how you might think of a cell phone plan…Fundamentally, that’s really what we’re trying to offer.”
It’s also important that professionals have the margins and revenues to be able to cover their expenses in order to provide exceptional hearing care, says Song. Whisper plans to provide upfront fees and work with professionals, while offering patients a better way to pay for the product, support, and systems that the company has developed. Currently, a select number of hearing care professionals are using the Whisper Hearing System, and the company is now expanding from this base of dispensing offices.
When asked how he thinks Whisper will change the hearing aid market, Song quickly replied, “I really hope that everybody around the world gets an upgradable hearing aid in the next 5 years. And, of course, I hope it’s ours. We have a lot to offer. But if the market moves toward Whisper in 5 years, then we’re competing with everybody to make the best upgrades. Frankly, I think that’s a big win for the industry. And it’s also a big win for my grandfather, right? I think, as part of that vision, we have to be really mindful about how much we bite off in any of our product development. So this first product represents a first step, especially on the device with this kind of learning capability and working with professionals on this payment model—all of the new things that we’ve already talked about. But there are other aspects around this kind of patient-centric, consumer-centric model with the professional and I think there’s a lot of interactivity that we can build on. There’s a lot of new ideas we have about how to better integrate everything together. And so, more and more, we’ll be able to build that out and address those issues because we’ll have an excellent learning hearing aid on the market.”
Funding for Whisper
The initial investment to establish the company came from Sequoia Capital and First Round Capital, and on Thursday (October 15) Whisper announced the close of a $35 million Series B funding round led by Quiet Capital for total funding of $53 million. Advisors for the company include Mike Vernal of Sequoia and former VP of engineering at Facebook; audiologist Robert Sweetow, who is the former UCSF Director of Audiology; Lee Linden of Quiet Capital and founder of TapJoy and Karma; Rob Hayes of First Round, which also invested in Uber and Square; and Stewart Bowers, former VP of engineering at Tesla who was responsible for AutoPilot.
“Software-defined hearing technology is the future,” said Vernal in a press statement. “By building the Whisper Hearing System around software, the Whisper team will be able to improve patient care with a device that adapts, upgrades, and improves continuously for the wearer’s benefit. This is the start of a new paradigm for delivering hearing technology, and we’re thrilled to partner with Whisper on this journey.”
“What I look for in a company is the team,” said Hayes. “The Whisper team combines incredible expertise in cutting edge artificial intelligence, software, and hardware with a genuine passion for helping people. I’m excited to work with them to transform the hearing space.”
Hearing Speech Requires Quiet—In More Ways than One
A very interesting article by:
Kim Krieger, Research Writer, University of Connecticut
Perceiving speech requires quieting certain types of brain cells, reports a team of researchers from UConn Health and the University of Rochester in an upcoming issue of the Journal of Neurophysiology. Their research reveals a previously unknown population of brain cells, and opens up a new way of understanding how the brain hears, according to an article on the UConn Today website.
Your brain is never silent. Brain cells, known as neurons, constantly chatter. When a neuron gets excited, it fires up and chatters louder. Following the analogy further, a neuron at maximum excitement could be said to shout. When a friend says your name, your ears signal cells in the middle of the brain. Those cells are attuned to something called the amplitude modulation frequency. That’s the frequency at which the amplitude, or volume, of the sound changes over time.
Amplitude modulation is very important to human speech. It carries a lot of the meaning. If the amplitude modulation patterns are muffled, speech becomes much harder to understand. Researchers have known there are groups of neurons keenly attuned to specific frequency ranges of amplitude modulation; such a group of neurons might focus on sounds with amplitude modulation frequencies around 32 Hertz (Hz), or 64 Hz, or 128 Hz, or some other frequency within the range of human hearing. But many previous studies of the brain had shown that populations of neurons exposed to specific amplitude-modulated sounds would get excited in seemingly disorganised patterns. The responses could seem like a raucous jumble, not the organised and predictable patterns you would expect if the theory of specific neurons attuned to specific amplitude modulation frequencies were the whole story.
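For readers unfamiliar with the term, here is a minimal sketch of an amplitude-modulated sound: a steady tone whose volume rises and falls at a fixed rate, in this case 32 times per second. The carrier frequency, modulation depth, and sample rate are arbitrary illustrative choices, not parameters from the study.

```python
# Minimal illustration of amplitude modulation: a 1 kHz tone whose loudness
# fluctuates at 32 Hz. All values are arbitrary illustrative choices.
import numpy as np

fs = 44_100                      # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)    # one second of samples
carrier_hz = 1_000               # the tone itself
mod_hz = 32                      # amplitude modulation frequency
depth = 1.0                      # full modulation

envelope = 0.5 * (1 + depth * np.sin(2 * np.pi * mod_hz * t))   # slow rise and fall in volume
signal = envelope * np.sin(2 * np.pi * carrier_hz * t)          # the amplitude-modulated tone
```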
UConn Health neuroscientists Duck O. Kim and Shigeyuki Kuwada passionately wanted to figure out the real story. Kuwada had made many contributions to science’s understanding of binaural (two-eared) hearing, beginning in the 1970s. Binaural hearing is essential to how we localise where a sound is coming from. Kuwada (or Shig, as his colleagues called him) and Kim, both professors in the School of Medicine, began collaborating in 2005 on how neural processing of amplitude modulation influences the way we recognise speech. They had a lot of experience studying individual neurons in the brain, and, together with Laurel Carney at the University of Rochester, they came up with an ambitious plan: they would systematically probe how every single neuron in a specific part of the brain reacted to a certain sound when that sound was amplitude modulated, and when it was not. They studied isolated single-neuron responses of 105 neurons in the inferior colliculus (a part of the brainstem) and 30 neurons in the medial geniculate body (a part of the thalamus) of rabbits. The study took them two hours a day, every day, over a period of years to get the data they needed.
While they were writing up their results, Shig became ill with cancer. But still he persisted in the research. And after years of painstaking measurement, all three of the researchers were amazed at the results of their analysis: there was a hitherto unknown population of neurons that did the exact opposite of what the conventional wisdom predicted. Instead of getting excited when they heard certain amplitude modulated frequencies, they quieted down. The more the sound was amplitude modulated in a specific modulation frequency, the quieter they got.
It was particularly intriguing because the visual system of the brain has long been understood to operate in a similar way. One population of visual neurons (called the “ON” neurons) gets excited by certain visual stimuli while, at the same time, another population of neurons (called the “OFF” neurons) gets suppressed.
Last year, when Shig was dying, Kim made him a promise.
“In the final days of Shig, I indicated to him and his family that I will put my full effort toward having our joint research results published. I feel relieved now that it is accomplished,” Kim says. The new findings could be particularly helpful for people who have lost their ability to hear and understand spoken words. If such people can be offered therapy with an implant that stimulates brain cells directly, the implant could try to match the natural behavior of the hearing brain.
“It should not excite every neuron; it should try to match how the brain responds to sounds, with some neurons excited and others suppressed,” Kim says.
The research was funded by the National Institutes of Health.
Original Paper: Kim DO, Carney LH, Kuwada S. Amplitude modulation transfer functions reveal opposing populations within both the inferior colliculus and medial geniculate body. Journal of Neurophysiology. 2020. DOI: https://doi.org/10.1152/jn.00279.2020.
Researchers Explain Link Between Hearing Loss & Dementia
Hearing loss has been shown to be linked to dementia in epidemiological studies and may be responsible for a tenth of the 47 million cases worldwide.
Now, published in the journal Neuron, a team at Newcastle University provide a new theory to explain how a disorder of the ear can lead to Alzheimer’s disease—a concept never looked at before. An article summarising the results of the research appears on the University’s website.
It is hoped that this new understanding may be a significant step towards advancing research into Alzheimer’s disease and how to prevent the illness for future generations.
Key Considerations
Newcastle experts considered three key aspects: a common underlying cause for hearing loss and dementia; lack of sound-related input leading to brain shrinkage; and cognitive impairment resulting in people having to engage more brain resources to compensate for hearing loss, which then become unavailable for other tasks.
The team propose a new angle which focuses on the memory centers deep in the temporal lobe. Their recent work indicates that this part of the brain, typically associated with long-term memory for places and events, is also involved in short-term storage and manipulation of auditory information.
They consider explanations for how changes in brain activity due to hearing loss might directly promote the presence of abnormal proteins that cause Alzheimer’s disease, therefore triggering the disease.
Professor Tim Griffiths, from Newcastle University’s Faculty of Medical Sciences, said: “The challenge has been to explain how a disorder of the ear can lead to a degenerative problem in the brain.
“We suggest a new theory based on how we use what is generally considered to be the memory system in the brain when we have difficulty listening in real-world environments.”
Collaborative Research
Work on mechanisms for difficult listening is a central theme for the research group, which includes members in Newcastle, UCL, and Iowa University, and has been supported by a Medical Research Council program grant.
Dr Will Sedley, from Newcastle University’s Faculty of Medical Sciences, said: “This memory system engaged in difficult listening is the most common site for the onset of Alzheimer’s disease.
“We propose that altered activity in the memory system caused by hearing loss and the Alzheimer’s disease process trigger each other. Researchers now need to examine this mechanism in models of the pathological process to test if this new theory is right.”
The experts developed the theory of this important link with hearing loss by bringing together findings from a variety of human studies and animal models. Future work will continue to look at this area.
BBC Looks at How Loud Music Can Lead to Early Signs of Hearing Damage
Those who frequently attend loud concerts and music events may be more likely to have earlier signs of hearing damage according to an article in BBC Science Focus Magazine.
The article examined a study from researchers at the University of Manchester, which suggests that although the damage observed is not enough to be diagnosed as full-blown hearing loss, it could potentially have a cumulative effect on hearing later in life. Of the 123 people tested, researchers found that those exposed to loud music had reduced hair cell function.
To prevent this kind of damage, the researchers suggest avoidance of noisy situations, reduction of volume, or the use of hearing protection such as earplugs or earmuffs.
1 in 6 UK adults suffer hearing loss and, on average, people believe they should have their hearing tested every 2-3 years, yet most of us only have our hearing tested once a decade!
In a recent survey of 2,000 UK adults, commissioned by the British Irish Hearing Instrument Manufacturers Association (BIHIMA) and announced on its website, 16% of respondents self-reported suffering from hearing loss, with men being nearly twice as likely to suffer as women: 1 in 5 men reported suffering from hearing loss compared to 1 in 10 women.
These UK figures are significantly higher than the 1 in 9 Europeans with self-reported hearing loss, according to the latest Eurotrak report.
Eleven percent of 16-24 year olds surveyed say they too suffer hearing loss. This figure doubles to 22% in the over 55 age group. This revelation comes as no surprise as we are familiar with the concept that hearing can deteriorate with age.
Hearing loss compounds feelings of isolation and loneliness which can affect the lives of sufferers. As with loss of vision, identifying and treating hearing loss can improve an individual’s quality of life.
Nearly half of those who say they have a hearing loss claim to wear hearing instruments according to BIHIMA’s UK 2018 Eurotrak study, leaving over 50% not taking advantage of available technology. This problem could be managed with regular visits to an audiologist, according to BIHIMA.
BIHIMA Chairman, Paul Surridge comments on the survey’s findings: “Not everyone notices a decline in their hearing. It’s often a relative or family friend that raises the subject. We know hearing loss can have a detrimental effect on people’s mental health and the way they live their lives. As a society, we need to encourage everyone to have regular hearing tests and, when appropriate, be fitted with life changing hearing instruments to prevent unnecessary suffering.”
BIHIMA advises people to get their hearing tested every three years, and annually after the age of 55. Just as we visit the optician and dentist regularly, our hearing should also be valued and protected.
Research and Methodology:
The research was conducted by Censuswide across 2,000 UK adults. It was completed in February 2020, but publication was delayed due to the coronavirus outbreak. www.censuswide.com
Source: BIHIMA
Transport noise is a major problem in Europe, with over 100 million people living in areas where road traffic noise exceeds levels greater than 55dB, the health-based threshold set by the EU. A new study by the University of Oxford and the University of Leicester has found a connection between traffic noise and obesity. Long-term exposure to road traffic noise, such as living near a motorway or on a busy road, was associated with an increase in body mass index and waist circumference, which are key markers of obesity, according to an announcement on Oxford’s website. The study was published in the journal Environmental Research.
“While modest, the data revealed an association between those living in high traffic-noise areas and obesity, at around a 2% increase in obesity prevalence for every 10dB of added noise,” said lead author Dr Samuel Yutong Cai, a senior epidemiologist at the University of Oxford. “The association persisted even when we accounted for a wide range of lifestyle factors, such as smoking, alcohol use, physical activity, and diet, as well as when taking into account socioeconomic status of both individuals and the overall area. Air pollution was also accounted for, especially those related to traffic.”
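Taken purely as a back-of-the-envelope illustration of the reported effect size (not a re-analysis, and assuming the ~2% figure is a relative increase in prevalence per 10dB of added noise), the association scales like this:

```python
# Back-of-the-envelope illustration of the reported association only; not a re-analysis.
# Assumes the ~2% figure is a relative increase in obesity prevalence per 10 dB of added noise.
baseline_prevalence = 0.25     # hypothetical baseline obesity prevalence
increase_per_10db = 0.02       # ~2% relative increase per 10 dB (from the summary above)

for added_db in (0, 10, 20, 30):
    prevalence = baseline_prevalence * (1 + increase_per_10db) ** (added_db / 10)
    print(f"+{added_db:>2} dB of traffic noise -> estimated prevalence {prevalence:.1%}")
```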
This is the “largest study to-date on noise and obesity,” looking at data on over 500,000 people from three European biobanks in the UK, Norway, and the Netherlands. Links between noise and weight were found in the UK and Norway, but not the Netherlands cohort. While the study is unable to confirm a causal relationship, the results echo those from a number of previous studies conducted in other European countries.
“It is well-known that unwanted noise can affect quality of life and disturb sleep,” said co-author Professor Anna Hansell, director of the University of Leicester’s Centre for Environmental Health and Sustainability. “Recent studies have raised concerns that it also may influence general health, with some studies suggesting links to heart attacks and diabetes. Road traffic noise may increase stress levels, which can result in putting on weight, especially around the waist.”
“On the individual level, sticking to a healthy lifestyle remains a top strategy to prevent obesity,” said Cai. “However, at the population level, these results could have some policy implications. Environmental policies that target reducing traffic noise exposure may help tackle many health problems, including obesity.”
Led by Hansell, work is ongoing to investigate other sources of noise in the UK, such as aircraft noise, and its effect on health outcomes. In the future, long-term follow-up studies would be valuable in providing more information on how the relationship between noise and weight functions.
“As we emerge and recover from COVID-19, we would encourage the government to look at policies that could manage traffic better and make our public spaces safer, cleaner, and quieter,” said Cai. “Air pollution is already a well-known health risk, but we now have increasing evidence that traffic noise is an equally important public health problem. The UK should take this opportunity to think about how we can, as a society, reorganize cities and communities to support our health and reap better health outcomes across the whole population.”
Original Paper: Cai Y, Zijlema WL, Sorgjerd EP, et al. Impact of road traffic noise on obesity measures: observational study of three European cohorts. Environmental Research. 2020;110013. DOI: https://doi.org/10.1016/j.envres.2020.110013
Source: Oxford University, Environmental Research
University of Auckland to Study Chatbot Technology for Potential Tinnitus Therapy
Chatbot technology that offers therapy for tinnitus sufferers via a mobile device such as a smartphone will be trialed at the University of Auckland, according to an announcement on the school’s website.
Potential Tinnitus Therapy
Researchers are recruiting participants for the “Tinnibot” study which is aimed at helping those who suffer from a hearing disorder that affects around one in ten New Zealanders and more than 700 million people worldwide.
Tinnitus is usually experienced as a ringing in the ears but sufferers report a range of noises including buzzing, clicking, and even the sound of cicadas. Severity varies: sounds can be continuous or intermittent but the condition is linked to serious mental health effects including depression, anxiety, and insomnia. Currently there is no cure.
Dr Fabrice Bardy from the University of Auckland’s School of Psychology says that online technologies and devices such as smartphones are changing the way health care is delivered, creating new opportunities to treat tinnitus and to study which treatments work best.
Dr Fabrice Bardy
Tinnibot is a chatbot program which uses Cognitive Behavioural Therapy (CBT), proven to be effective in the treatment of tinnitus but usually only available through one-on-one sessions which can be expensive and involve long wait times.
The chatbot’s software interface delivers CBT designed for an individual’s needs directly to their mobile, conducting an automated and interactive text conversation designed to help people regulate their thoughts by focusing on positive thoughts and challenging negative ones.
The interface incorporates a sound therapy library, which has proved to be an effective tinnitus therapy, particularly for those who have trouble sleeping. It works by using noise at just the right volume to drown out the sounds tinnitus can produce.
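As a rough illustration of the masking idea (this is not Tinnibot's actual sound library, and the values below are arbitrary), broadband noise can be generated and scaled to a level the listener adjusts until it just covers the tinnitus sound:

```python
# Rough illustration of sound-therapy masking; not Tinnibot's implementation.
# Generates one minute of white noise scaled to a listener-chosen relative volume.
import numpy as np

fs = 44_100                     # sample rate (Hz)
duration_s = 60                 # one minute of masking noise
level = 0.2                     # relative volume, adjusted until the tinnitus is just covered

noise = np.random.default_rng().normal(0.0, 1.0, fs * duration_s)
masker = level * noise / np.max(np.abs(noise))   # normalise to +/-1, then scale to the chosen level
```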
Dr Bardy describes Tinnibot as being like having a tinnitus expert in your pocket.
“This chatbot interface is the first one designed specifically for the treatment of tinnitus, a tool that offers direct therapy and support which is convenient and affordable,” he said. “It will help people better understand their condition and to manage symptoms, give them a sense of being in control, and a confidence boost because that’s an important part of successful treatment.”
Participants in the research will be split into two cohorts with one using Tinnibot only and the other using Tinnibot as well as video counseling with a psychologist. The aim is to see which treatment is more effective.
Honiton hearing near Exeter
If you have been bothered by tinnitus for over three months and are interested in participating in the study, contact Dr Bardy for more information.