Signia announced the launch of its Augmented Xperience (AX) hearing aid platform that “intelligently and automatically processes sound to better ensure that patients hear more clearly – regardless of the listening environment.”
Rather than simply amplifying all sounds, like most of today’s hearing aids, Augmented Xperience is said to “intelligently understand which sounds should be pulled to the foreground and prioritized, and which should remain in the background.”
The net result of this “world’s first” split-processing technology is “a fully-immersive and intelligent hearing experience. Sounds shift into the foreground and background naturally and seamlessly depending on the environment, creating an augmented hearing experience that’s better than normal hearing in certain situations.[1]”
“Hearing isn’t always easy. A group of people talking simultaneously, softly-spoken talkers in a bustling room, too much background noise – these are challenging environments regardless of a patient’s hearing ability,” said Dr Leanne Powers, director of professional education at Signia. “Augmented Xperience changes the game by understanding which sounds should be brought into focus and which remain in the background – creating an almost superhuman level of hearing that optimizes a patient’s performance through enhanced hearing in any situation.”
Signia’s all-new Pure Charge&Go AX hearing aids, the first to feature the AX platform, deliver up to 36 hours of run-time per charge and are directly compatible with Android and iOS devices, the company says.
The Augmented Xperience platform is rooted in a “world’s first Augmented Focus technology” that is said to “process speech and background noise separately to create a clear contrast between the two.” According to Signia, it then recombines them to deliver “outstanding speech clarity even in a fully immersive soundscape – like a crowded cafe or an open office environment.”
Augmented Focus leverages two independent processors – the first of which addresses ‘focus’ sounds like the speech of a conversation partner, while the second addresses ‘surrounding’ sounds like background music or ambient laughter, which create situational awareness and excitement. The two processors capture focus and surrounding sounds independently to create a greater contrast between the two – pulling focus sounds closer and placing surrounding sounds further away.
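In rough terms, the split-and-recombine idea described above can be sketched in a few lines. This toy Python example is not Signia’s algorithm; it simply assumes two already-separated streams and illustrates how boosting one and attenuating the other increases their contrast before recombination (the function name and gain values are invented for illustration):

```python
def split_process(focus, surround, focus_gain=1.5, surround_gain=0.6):
    """Toy illustration of split processing.

    The 'focus' stream (e.g., a conversation partner's speech) is
    amplified and the 'surrounding' stream (e.g., ambient noise) is
    attenuated independently, then the two are recombined. Processing
    the streams separately is what creates the contrast between them.
    """
    processed_focus = [s * focus_gain for s in focus]
    processed_surround = [s * surround_gain for s in surround]
    return [f + s for f, s in zip(processed_focus, processed_surround)]
```

The point of the sketch is only that each stream gets its own gain path before mixing; in a real device each path would run far more sophisticated processing than a scalar gain.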
In addition to Augmented Focus, the AX platform features include:
Acoustic-Motion Sensors: Recognize one’s movements and adjust sounds accordingly to “ensure hearing in any situation is as precise and personalized as possible”;
Own Voice Processing (OVP): Processes the wearer’s voice separately from other sounds, “leading to higher user satisfaction with the sound of their own voice[2]”;
Signia Face Mask Mode: Helps deliver “better speech understanding through masks;”
The Signia app: Provides access to hearing aid controls, streaming capabilities, tinnitus therapy, the Signia Assistant for 24/7 digital support, Signia Telecare for remote care support, and much more.
“Signia has invested heavily in developing world-first technologies across motion sensing, voice processing, speech intelligibility, and now augmented hearing,” said Powers. “With the AX platform, and its Augmented Focus technology, Signia is continuing to demonstrate its commitment to our HCPs and their patients by providing them with solutions that level up their human performance through optimized hearing throughout one’s day.”
Signia Pure Charge&Go AX: Combining Modern Design and Ultimate Connectivity
Built on the AX platform, the Pure Charge&Go AX features a “sleek hearing aid design that is both comfortable and discreet.” As the company’s smallest rechargeable RIC hearing aid, Pure AX can make it “easier and more comfortable to wear with glasses and/or face masks.”
The Pure Charge&Go AX is also compatible with the Pure Charge&Go AX CROS transmitter for patients with single-sided deafness, and with an optional T-Coil, which enables the patient to pick up sound signals in public places like train stations, theaters, and museums.
Pure AX “boasts up to 36 hours of wear time on a single charge” and features convenient connectivity to ASHA-enabled Android phones and iPhones for effortless direct streaming. It is available in black, graphite, dark champagne, silver, pearl white, fine gold, deep brown, sandy brown, rose gold, and beige color options.
For more information on Signia Augmented Xperience, visit here. To learn more about Signia Pure Charge&Go AX, visit here.
Phonak, a global provider of hearing solutions, announced Naída Paradise, the power hearing aid that “gives people with severe-to-profound hearing loss the power, sound quality, and wireless connectivity they need to connect with everything around them.” Now in its seventh generation, Naída Paradise is said to be “14% smaller, 27% lighter1, and further improves upon the hearing performance that wearers expect from Phonak.” This includes “powerful sound, industry-leading connectivity, and soon a new custom program memory feature with the new myPhonak 5.0 app.”
Phonak Naida Paradise and Roger On
Naída Paradise features a powerful double receiver that delivers up to 141 dB of peak gain in the UP model and up to 130 dB in the rechargeable model, according to Phonak. It’s powered by the new PRISM sound processing chip and features AutoSense OS 4.0 for “a host of premium features that work together seamlessly.” For example, the hearing aids can “automatically enhance soft speech in quiet places or reduce noise in loud environments.” A built-in accelerometer detects movement and automatically steers the microphones to improve listening on-the-go.2
Phonak Naida Paradise
Naída Paradise helps eliminate connectivity barriers that previously existed for consumers who needed more power. With Phonak universal connectivity, wearers can wirelessly stream audio directly into both hearing aids from virtually any smartphone, TV, laptop, tablet, eBook, and more. Phonak Paradise technology helps allow two active Bluetooth connections at the same time, so wearers can stay connected to their smartphone and their video chat without having to manually switch back and forth.
In addition to universal Bluetooth connectivity, Naída Paradise hearing aids are also equipped with RogerDirect. This means wearers can also receive the Roger remote microphone signal with no additional accessory required. Launched in 2013, Roger™ technology is “proven to boost hearing performance in loud noise and over distance.” In fact, hearing aid wearers who receive the Roger signal have better speech understanding in noise and over distance than people with normal hearing.3 Some Roger microphones and receivers have also been shown to help users understand up to 61% more speech in a group conversation in 75 dBA of noise than using hearing aids alone.4
Universal Bluetooth connectivity coupled with on-board microphones means Naída Paradise wearers can use their hearing aids as wireless headsets for hands-free calls. A new Tap Control2 feature allows users to double tap on their ear to accept or end a call, or pause or resume streaming. A tap on the other ear gives access to smartphone voice-assistants like Siri or Google Assistant.
“Naída has a long-lasting history of delivering power without sacrificing sound quality, so we knew that we needed to deliver an outstanding product to our wearers who depend so heavily on their devices,” said Jon Billings, Vice-President Phonak Marketing. “With Naída Paradise, we’re making history again by giving those with severe forms of hearing loss access to next-level, powerful sound with industry-leading connectivity.”
In late spring, the myPhonak app’s 5.0 update will include the myPhonak Memory feature. It helps allow consumers to save a custom program from the app to the hearing aids, access the last-used custom program using the hearing aid’s multi-function button, or access other custom programs via the app.
Phonak is also preparing for the newest member of the Roger family with the debut of Roger On. The new Roger On remote microphone will feature MultiBeam 2.0 technology and an “improved pointing mode that allows the user to zoom into a speaker by simply pointing.” Roger On will be compatible with most hearing aids and cochlear implants and will be able to stream a variety of audio content.
The new Phonak Naída Paradise is available for pre-order by licensed hearing care professionals in the US and other select markets and will begin shipping in late February. The myPhonak 5.0 app featuring myPhonak Memory feature as well as the new Roger On microphone will be introduced in the US and other select markets in late spring.
For US hearing care professionals to learn more and to pre-order: https://www.phonakpro.com/us/en/campaign/naida.html.
Source/Reference
1 Naída P UP with RogerDirect compared to Naída B UP + external Roger receiver.
2 In the Phonak power BTE portfolio, only Naída P-PR comes with motion sensor technology, including Tap Control.
Researcher Designs Vibrating Glove for Deaf Individuals
Artem Brazhnikov, a master’s student in the Faculty of Mechanical Engineering, Metallurgy, and Transport of Samara Polytech, a Russian technical university, attempted to help restore hearing function with the help of a vibrating glove he designed. A press release announcing the invention appears on the EurekAlert website.
Initially, Artem designed a joystick glove to be able to play computer games one-handed. He then improved the device, turning it into an unusual hearing aid. To make the joystick glove into the vibro-glove, he removed the finger-position sensors, provided the glove with tactile feedback modules (vibration motors), and converted the electronic control unit from a game controller into an audio signal spectrum analyzer.
“When a person loses his hearing, his other senses become more acute. Sensory substitution occurs: the brain compensates for the lack of information from one sense organ at the expense of others,” said Artem. “A vibrating glove is a re-translator that converts sounds into tactile sensations.”
A microphone on the glove amplifies the audio signal and transmits it to a spectrum analyzer that splits the audio range into separate frequency bands. Each tactile module corresponds to one frequency band. The strength of the tactile stimulation is proportional to the amplitude of sound vibrations in the corresponding band. This process is somewhat similar to playing a keyboard.
“For example, a piano has many keys, pressing which (tactile stimulation) generates a certain note, that is, a sound vibration of a certain frequency,” Artem explained. “Now imagine that there is an instrument that performs the opposite operation, that is, catches notes (sound vibrations) and converts them into keystrokes (tactile stimulation). A person playing such an instrument does not hear the sounds it makes, but feels how the piano itself presses the keys. So a vibrating glove is a piano, but only working vice versa.”
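In software terms, the glove’s signal path, a microphone feeding a spectrum analyzer whose per-band amplitudes drive individual vibration motors, might be sketched as follows. This is a hypothetical illustration, not Brazhnikov’s actual firmware; the function name, equal-width band layout, and 0-255 intensity scale are all assumptions:

```python
import cmath

def band_intensities(samples, n_bands=5, max_level=255):
    """Map one frame of audio samples to per-band vibration levels.

    Hypothetical sketch of the signal path described in the article:
    a naive DFT plays the role of the spectrum analyzer, its bins are
    grouped into n_bands bands (one per tactile module), and each
    band's amplitude is scaled to a motor intensity from 0 to max_level.
    """
    n = len(samples)
    half = n // 2  # only bins up to the Nyquist frequency are unique
    mags = []
    for k in range(half):
        s = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s) / n)
    # Group bins into equal-width bands; leftover top bins are ignored.
    per_band = max(1, half // n_bands)
    band_amps = [sum(mags[b * per_band:(b + 1) * per_band])
                 for b in range(n_bands)]
    # The strongest band drives its motor at full intensity.
    peak = max(band_amps) or 1.0
    return [round(max_level * a / peak) for a in band_amps]
```

A frame containing a pure tone lands almost entirely in one band, so only that band’s motor vibrates strongly, mirroring the “reverse piano” analogy above.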
Source: EurekAlert!, Samara Polytech
Image: EurekAlert!, Samara Polytech
New Drug Promises Relief from Tinnitus
Neurophysiologists at the University of Connecticut (UConn) have discovered a new drug that may prevent tinnitus and treat epilepsy by selectively affecting potassium channels in the brain. According to an article in the June 10, 2015 edition of The Journal of Neuroscience, Anastasios V. Tzingounis, PhD, and colleagues say that both tinnitus and epilepsy are caused by overly excitable cells that flood the brain with an overload of signals that can lead to seizures (epilepsy) or phantom ringing in the ears (tinnitus).
The authors report that roughly 65 million people worldwide are affected by epilepsy. While exact statistics on tinnitus are not easy to determine, the American Tinnitus Association estimates that two million people in the US suffer from disabling tinnitus.
Anastasios V. Tzingounis, PhD, University of Connecticut
According to Tzingounis and co-authors, the existing drugs available to treat epilepsy don’t always work and can have serious side effects. One of the more effective drugs, retigabine, helps open KCNQ potassium channels, which serve as the “brakes” that shut down the signaling of overly excited nerves. Retigabine, however, has terrible side effects and is usually only given to adults who don’t get relief from other epilepsy drugs. The side effects of retigabine include sleepiness, dizziness, problems with hearing and urination, and a disturbing tendency to turn patients’ skin and eyes blue.
In 2013, Tzingounis began collaborating with Thanos Tzounopoulos, PhD, a tinnitus expert at the University of Pittsburgh, to create a new drug candidate. The new drug, SF0034, was chemically identical to retigabine except for an extra fluorine atom. SciFluor, which originally developed the compound, wanted to know whether it had promise for treating epilepsy and tinnitus.
Thanos Tzounopoulos, PhD, University of Pittsburgh
Tzingounis and Tzounopoulos thought the drug had the potential to be much better than retigabine in treating both conditions. They first had to determine if SF0034 worked on KCNQ potassium channels the same way retigabine does, and if so, if it would be better or worse.
The co-authors explain in their article that KCNQ potassium channels are found in the initial segment of axons, long nerve fibers that reach out and almost touch other cells. The gap between the axon and the other cell is called a synapse. When the cell wants to signal to the axon, it floods the synapse with sodium ions to create an electrical potential. When that electrical potential goes on too long, or gets overactive, the KCNQ potassium channel kicks in. The result is that it opens, potassium ions flood out, and the sodium-induced electrical potential shuts down.
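The feedback loop described here, excitation that rises until a potassium “brake” pulls it back down, can be caricatured in a few lines of Python. This is a deliberately crude toy model with arbitrary units, not a biophysical simulation; the function and all of its parameters are invented for illustration:

```python
def simulate_potential(steps, na_input, k_threshold, k_strength, k_broken=False):
    """Toy model of the feedback loop described in the article.

    Each step, sodium-driven input pushes the potential up; once it
    crosses a threshold, a KCNQ-like potassium 'brake' opens and pulls
    the excess back down. With the brake disabled (k_broken=True), the
    potential runs away -- the situation the article links to epilepsy
    and tinnitus. Units and constants are arbitrary.
    """
    v = 0.0
    trace = []
    for _ in range(steps):
        v += na_input  # sodium influx raises the potential
        if not k_broken and v > k_threshold:
            # potassium efflux damps the excess above threshold
            v -= k_strength * (v - k_threshold)
        trace.append(v)
    return trace
```

With the brake enabled the potential settles just above the threshold; with it disabled, the potential climbs without limit, the runaway behavior the article associates with seizures and phantom ringing.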
In some types of epilepsy, the KCNQ potassium channels have trouble opening and shutting down runaway electrical potentials in the nerve synapse. Retigabine helps them open. According to the authors, there are five different kinds of KCNQ potassium channels in the body, but only two are important in epilepsy and tinnitus: KCNQ2 and KCNQ3. The problem with retigabine is that it acts on other KCNQ potassium channels as well. That’s why it has so many unwanted side effects.
When testing SF0034 in neurons, the researchers found that it was more selective than retigabine. It appeared to open only KCNQ2 and KCNQ3 potassium channels, and to not affect the KCNQ4 or KCNQ5 potassium channels. The research showed that SF0034 was more effective than retigabine at preventing seizures in animals, and it was also less toxic.
The results are promising, and SciFluor plans to start FDA trials with SF0034 to test its safety and efficacy in people. Treating epilepsy is the primary goal, but treating or preventing tinnitus is a secondary goal.
Zoom Charges Monthly Fee for Closed Captioning During Pandemic, ‘WBFO’ Reports
Video conferencing during the coronavirus pandemic can make communication difficult for hearing-impaired people working remotely. According to an article on the WBFO/NPR website, hearing advocate and Living With Hearing Loss founder Shari Eberts recently wrote an open letter, which turned into a petition with 58,000 signatures, asking video conferencing companies to remove the paywall from their captioning services.
According to the article, both Google and Microsoft have complied, but Zoom is still charging a $200 monthly fee for users to be able to access closed captioning.
Issues with video conferencing, including poor audio quality and spotty internet connections, can make lip reading difficult. Even with workarounds like speaker mode, which shows a larger view of the person speaking, or headphones to improve sound quality, a person’s lips can be out of sync with their words, Eberts says in the article. Closed captions could improve communication in these situations, she says.
“It’s hard for us to want to jump in or to share our thoughts because we’re not sure what’s been said. And obviously, there’s a lot of trepidation about looking silly or repeating something that someone just said,” Eberts is quoted in the article as saying.
To read the article in its entirety, please click here.
Source: WBFO
Whisper may have a quiet name, but it could reverberate loudly in the hearing healthcare industry. The company launched its first new hearing aid on October 15—a product that really is significantly different from all others dispensed by audiologists and hearing aid specialists. And, yes, that’s right: the Whisper Hearing System is designed for dispensing by hearing care professionals. As such, Whisper represents the first new major hearing aid manufacturer with a product specifically designed for dispensing since the InSound Medical XT was approved by the FDA in 2003 (later purchased in 2010 by Sonova and renamed Lyric).
The Whisper RIC hearing aids and brain.
And a bit like Lyric, Whisper will use a subscription payment model for consumers. The leasing concept is gaining ground in hearing healthcare, in part due to the fact that technology moves so fast, hearing aids can be expensive, and frequent product upgrades are now a given in the industry. Whisper will be available via a comprehensive monthly plan that includes ongoing care from a local hearing care professional, a lease of the Whisper Hearing System, regular software upgrades, and a 3-year warranty that not only covers the system itself but also loss and damage. The company is offering a special introductory rate of $139/month (regularly $179/month) for a 3-year term.
The New Whisper Hearing System
The Whisper Hearing System essentially has three components:
A hearing aid processor that resembles an advanced receiver-in-the-canal (RIC) hearing aid;
The Whisper Brain, a small device that runs an AI-driven Sound Separation Engine to optimize sound in real time and also enables connectivity to iPhones; and
A phone app that provides an interface for the consumer.
The Whisper team, which is largely composed of executives from the AI field, created the Whisper Brain as a dedicated, powerful sound processing system that also allows for updates and other capabilities—instead of relying on the wearer’s smartphone for many of these functions. “We developed the Whisper Brain to run the core technology we’ve developed for hearing,” said company Co-founder and President Andrew Song in an interview with Hearing Review. “Think about your smartphone and all the processing inside it. We’re using the Whisper Brain to apply this type of processing to hearing without having to compete with smartphone games or applications. The Whisper Brain is a dedicated processor designed to provide the best hearing.”
However, the Whisper Brain isn’t required to use the hearing aid, as there may be situations where the wearer wants to step away from it or not take it with them. In those situations, the hearing aid uses the “onboard” hearing aid algorithms in the RIC (similar to other advanced hearing aids when unpaired to the user’s cell phone).
Wireless connectivity with iPhones is also provided through the Whisper Brain via Bluetooth, and the company says it may support other phones and plans to expand on this in the future. The RICs use a size 675 battery with an expected battery life of 4-5 days under typical use, including streaming, and the Whisper Brain has a USB port for recharging.
Not Your Grandfather’s Hearing Aid
Andrew Song
According to Song, Whisper started about 3 years ago in San Francisco when he began discussions with another Whisper co-founder, Dwight Crow, the company’s CEO. Song is the former head of products for an online instant-messaging (IM) system most of us are familiar with: Facebook Messenger Core. A mathematics and computer science graduate of the University of Waterloo, he is an expert in artificial intelligence and a member of Sequoia Capital’s Scout Program which was formed to discover and develop promising companies. Crow is the founder of Carsabi, a machine-learning based car sales aggregator acquired by Facebook in 2012, and he helped build the e-commerce segment at Facebook which yields over $1 billion per quarter in revenue. A third co-founder, Shlomo Zippel, was the applications team leader at PrimeSense which built the 3D sensor technology behind Microsoft Kinect.
Jim Kothe
The company then added Jim Kothe, an audiologist and hearing industry veteran with a wealth of experience in both dispensing and manufacturing, as head of sales, along with an extremely impressive team of executives with experience and leadership roles at companies like Facebook, Nest, Google, Invisalign, Johnson & Johnson, Solta Medical, and Cutera. Together they are collaborating on a product that blends artificial intelligence, hearing care, hardware, and software to help solve the challenge of providing better hearing.
“I think for me, and probably for everyone at the company, it’s a very personal mission,” says Song. “Personally, the starting point is really my grandfather. He has hearing loss and is not an uncommon story when you work in this business: I’d say that he’s a hearing aid owner, but not a hearing aid wearer.”
This set into motion Song’s investigation into what hearing aid technology was doing, what experiences people were having with it, and why his grandfather had the complaints he did. “That really opened my world to all the exciting things that could be done, but also the opportunity we have for how we can really build a product to help [people like him],” says Song. “Since then we’ve been putting the product together and bringing the expertise that comes from hearing folks like Jim and the others on our team—and blending it with the kind of product and technology ideas we almost take for granted here in Silicon Valley. Products are becoming more consumer friendly, more consumer oriented, and we’re building some of those ideas into a new type of hearing aid product. So, while Whisper is a hearing aid regulated by the FDA, all of these things influenced our approach, our mentality, and our vision towards this space, and we think our approach is a little different [from those of other hearing aid manufacturers].”
The larger capacity for processing power is extremely exciting for Song and his colleagues, and he likens this advancement to the leap from analog to digital hearing technology. He says some great hearing aid algorithms have been, and will continue to be, created that will result in substantially improved hearing. However, there’s little point in having these algorithms if they can’t be fully employed in a wearable device.
He also says the problem in hearing aids is much more complex than, for example, those solutions found in noise-cancelling headphones. “Over time, [we’ve had] very ambitious people with a lot of ideas on what we should do with this powerful processing. What’s really exciting is not just having this technology, but also having a learning platform to be able to develop it. I think one of the most interesting parts of development is that the goal, at the end of the day, really isn’t about perfect noise removal. You need noise in your life. We have demos we can run that more or less perfectly remove noise…and it just creates sort of a weird environment. So, I think in many cases, the unique aspect of what we’re doing revolves around how do we use [the research] and how do we invent some truly novel ideas? Obviously, it’s not only about noise removal, but how we can use the powerful processing specifically in these hearing aids to make hearing aids really good for the purpose of listening. That subtlety is where we feel like we can really differentiate ourselves and truly make a difference in people’s lives.”
A System that Relies on Professional Care
Song says there has been a patient-centric approach at every turn in the design, development, marketing, and especially distribution of the Whisper Hearing System. And it starts with the hearing care professional’s expertise.
“I think there’s several very important things along that path; the first of which was to work with hearing care professionals who are the ‘artists’ in delivering great care,” Song told HR via a Zoom interview. “If I look at my grandfather’s experience, it was pretty obvious to me that having the right professionals made a huge difference. And so you can talk about using Zoom or you can talk about going direct to consumer, but it’s very, very obvious—even as a Silicon Valley engineer—that the audiologist is extremely important in the process. That’s why we made a decision very early on that we’d be working with professionals. And if you remember, when the company started in 2017, that’s when the OTC laws were getting passed. That’s where all the ‘cool stuff’ was supposed to be. Everyone was saying, ‘Get rid of these professionals!’ …But there’s a care-oriented mindset in hearing healthcare. You can see that there’s a personal aspect [needed] to evaluate what would be good for my grandfather. And when you talk to patients and you talk to audiologists, this becomes very clear. So, I think that was a very early decision that’s not necessarily about the product, per se, but about our business and how we best deliver the hearing system.”
One of the things Whisper also wants to address is the post-purchase feeling of regret that can accompany a high-end, high-technology purchase. As with any car, computer, or consumer electronics device, when a consumer purchases an expensive top-of-the-line hearing aid, there is doubtlessly a more advanced model with new processing capabilities and features that will be launched 6 months later. But, with hearing loss, Song believes that sense of regret can be magnified because hearing is such a personal, important 24/7 activity.
The Whisper Hearing Aid Brain
That led to the idea of a subscription-based system using a machine-learning platform that can be upgraded on regular intervals without continually replacing the actual hearing aid or brain itself. “The nature of our product is that it gets better over time. You don’t need to pay for [the upgrades]; the hearing aid learns on its own, and we’ll also deliver you a software upgrade every few months. [It’s] similar to how you might think of a cell phone plan…Fundamentally, that’s really what we’re trying to offer.”
It’s also important that professionals have the margins and revenues to be able to cover their expenses in order to provide exceptional hearing care, says Song. Whisper plans to provide upfront fees and work with professionals, while offering patients a better way to pay for the product, support, and systems that the company has developed. Currently, a select number of hearing care professionals are using the Whisper Hearing System, and the company is now expanding from this base of dispensing offices.
When asked how he thinks Whisper will change the hearing aid market, Song quickly replied, “I really hope that everybody around the world gets an upgradable hearing aid in the next 5 years. And, of course, I hope it’s ours. We have a lot to offer. But if the market moves toward Whisper in 5 years, then we’re competing with everybody to make the best upgrades. Frankly, I think that’s a big win for the industry. And it’s also a big win for my grandfather, right? I think, as part of that vision, we have to be really mindful about how much we bite off in any of our product development. So this first product represents a first step, especially on the device with this kind of learning capability and working with professionals on this payment model—all of the new things that we’ve already talked about. But there are other aspects around this kind of patient-centric, consumer-centric model with the professional and I think there’s a lot of interactivity that we can build on. There’s a lot of new ideas we have about how to better integrate everything together. And so, more and more, we’ll be able to build that out and address those issues because we’ll have an excellent learning hearing aid on the market.”
Funding for Whisper
The initial investment to establish the company came from Sequoia Capital and First Round Capital, and on Thursday (October 15) Whisper announced the close of a $35 million Series B funding round led by Quiet Capital, for total funding of $53 million. Advisors for the company include Mike Vernal of Sequoia, former VP of engineering at Facebook; audiologist Robert Sweetow, former UCSF Director of Audiology; Lee Linden of Quiet Capital, founder of TapJoy and Karma; Rob Hayes of First Round, which also invested in Uber and Square; and Stewart Bowers, former VP of engineering at Tesla, who was responsible for AutoPilot.
“Software-defined hearing technology is the future,” said Vernal in a press statement. “By building the Whisper Hearing System around software, the Whisper team will be able to improve patient care with a device that adapts, upgrades, and improves continuously for the wearer’s benefit. This is the start of a new paradigm for delivering hearing technology, and we’re thrilled to partner with Whisper on this journey.”
“What I look for in a company is the team,” said Hayes. “The Whisper team combines incredible expertise in cutting edge artificial intelligence, software, and hardware with a genuine passion for helping people. I’m excited to work with them to transform the hearing space.”
Hearing Speech Requires Quiet—In More Ways than One
A very interesting article by:
Kim Krieger, Research Writer, University of Connecticut
Perceiving speech requires quieting certain types of brain cells, report a team of researchers from UConn Health and the University of Rochester in an upcoming issue of the Journal of Neurophysiology. Their research reveals a previously unknown population of brain cells, and opens up a new way of understanding how the brain hears, according to an article on the UConn Today website.
Your brain is never silent. Brain cells, known as neurons, constantly chatter. When a neuron gets excited, it fires up and chatters louder. Following the analogy further, a neuron at maximum excitement could be said to shout. When a friend says your name, your ears signal cells in the middle of the brain. Those cells are attuned to something called the amplitude modulation frequency. That’s the frequency at which the amplitude, or volume, of the sound changes over time.
Amplitude modulation is very important to human speech. It carries a lot of the meaning. If the amplitude modulation patterns are muffled, speech becomes much harder to understand. Researchers have known there are groups of neurons keenly attuned to specific frequency ranges of amplitude modulation; such a group of neurons might focus on sounds with amplitude modulation frequencies around 32 Hertz (Hz), or 64 Hz, or 128 Hz, or some other frequencies within the range of human hearing. But many previous studies of the brain had shown that populations of neurons exposed to specific amplitude-modulated sounds would get excited in seemingly disorganised patterns. The responses could seem like a raucous jumble, not the organised and predictable patterns you would expect if the theory that specific neurons are attuned to specific amplitude modulation frequencies were the whole story.
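To make the concept concrete, an amplitude-modulated sound is simply a carrier tone whose volume envelope rises and falls at a slower modulation frequency, such as the 32 Hz figure mentioned above. The short sketch below is only an illustration of the idea; it is not the stimulus-generation method used in the study, and all parameter values are chosen for demonstration.

```python
import numpy as np

def am_tone(carrier_hz, mod_hz, depth, dur_s, fs=16000):
    """Generate a sinusoidally amplitude-modulated tone.

    The envelope (1 + depth * sin(2*pi*mod_hz*t)) varies the volume of the
    carrier at the modulation frequency -- the cue these neurons track.
    """
    t = np.arange(int(dur_s * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# A 1 kHz tone whose amplitude is modulated at 32 Hz, at full (100%) depth:
signal = am_tone(carrier_hz=1000, mod_hz=32, depth=1.0, dur_s=0.5)
```

Changing only `mod_hz` (say, from 32 Hz to 64 Hz or 128 Hz) leaves the carrier pitch alone but alters the envelope rate, which is exactly the dimension along which the reported neuron populations are excited or suppressed.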
UConn Health neuroscientists Duck O. Kim and Shigeyuki Kuwada passionately wanted to figure out the real story. Kuwada had made many contributions to science’s understanding of binaural (two-eared) hearing, beginning in the 1970s. Binaural hearing is essential to how we localise where a sound is coming from. Kuwada (or Shig, as his colleagues called him) and Kim, both professors in the School of Medicine, began collaborating in 2005 on how neural processing of amplitude modulation influences the way we recognise speech. They had a lot of experience studying individual neurons in the brain, and, together with Laurel Carney at the University of Rochester, they came up with an ambitious plan: they would systematically probe how every single neuron in a specific part of the brain reacted to a certain sound when that sound was amplitude modulated, and when it was not. They studied isolated single-neuron responses of 105 neurons in the inferior colliculus (a part of the brainstem) and 30 neurons in the medial geniculate body (a part of the thalamus) of rabbits. The study took them two hours a day, every day, over a period of years to get the data they needed.
While they were writing up their results, Shig became ill with cancer. But still he persisted in the research. And after years of painstaking measurement, all three of the researchers were amazed at the results of their analysis: there was a hitherto unknown population of neurons that did the exact opposite of what the conventional wisdom predicted. Instead of getting excited when they heard certain amplitude modulated frequencies, they quieted down. The more the sound was amplitude modulated in a specific modulation frequency, the quieter they got.
It was particularly intriguing because the visual system of the brain has long been understood to operate in a similar way. One population of visual neurons (called the “ON” neurons) gets excited by certain visual stimuli while, at the same time, another population of neurons (called the “OFF” neurons) gets suppressed.
Last year, when Shig was dying, Kim made him a promise.
“In the final days of Shig, I indicated to him and his family that I will put my full effort toward having our joint research results published. I feel relieved now that it is accomplished,” Kim says. The new findings could be particularly helpful for people who have lost their ability to hear and understand spoken words. If they can be offered therapy with an implant that stimulates brain cells directly, the implant could try to match the natural behavior of the hearing brain.
“It should not excite every neuron; it should try to match how the brain responds to sounds, with some neurons excited and others suppressed,” Kim says.
The research was funded by the National Institutes of Health.
Original Paper: Kim DO, Carney LH, Kuwada S. Amplitude modulation transfer functions reveal opposing populations within both the inferior colliculus and medial geniculate body. Journal of Neurophysiology. 2020. DOI: https://doi.org/10.1152/jn.00279.2020.
Researchers Explain Link Between Hearing Loss & Dementia
Hearing loss has been shown to be linked to dementia in epidemiological studies and may be responsible for a tenth of the 47 million cases worldwide.
Now, in a paper published in the journal Neuron, a team at Newcastle University provide a new theory to explain how a disorder of the ear can lead to Alzheimer’s disease—a concept never looked at before. An article summarising the results of the research appears on the University’s website.
It is hoped that this new understanding may be a significant step towards advancing research into Alzheimer’s disease and how to prevent the illness for future generations.
Key Considerations
Newcastle experts considered three key aspects: a common underlying cause for hearing loss and dementia; lack of sound-related input leading to brain shrinking; and cognitive impairment resulting in people having to engage more brain resources to compensate for hearing loss, which then become unavailable for other tasks.
The team propose a new angle which focuses on the memory centers deep in the temporal lobe. Their recent work indicates that this part of the brain, typically associated with long-term memory for places and events, is also involved in short-term storage and manipulation of auditory information.
They consider explanations for how changes in brain activity due to hearing loss might directly promote the presence of abnormal proteins that cause Alzheimer’s disease, therefore triggering the disease.
Professor Tim Griffiths, from Newcastle University’s Faculty of Medical Sciences, said: “The challenge has been to explain how a disorder of the ear can lead to a degenerative problem in the brain.
“We suggest a new theory based on how we use what is generally considered to be the memory system in the brain when we have difficulty listening in real-world environments.”
Collaborative Research
Work on mechanisms for difficult listening is a central theme for the research group, including members in Newcastle, UCL, and Iowa University, that has been supported by a Medical Research Council program grant.
Dr Will Sedley, from Newcastle University’s Faculty of Medical Sciences, said: “This memory system engaged in difficult listening is the most common site for the onset of Alzheimer’s disease.
“We propose that altered activity in the memory system caused by hearing loss and the Alzheimer’s disease process trigger each other. Researchers now need to examine this mechanism in models of the pathological process to test if this new theory is right.”
The experts developed the theory of this important link with hearing loss by bringing together findings from a variety of human studies and animal models. Future work will continue to look at this area.
Phonak’s Audéo Paradise Launch Supports the Company’s Overall “Well Hearing is Well Being” Mission
Voltaire said “Wherever my travels may lead, paradise is where I am.” Phonak is hoping its newest hearing aid, Audéo Paradise, will evoke similar sentiments in people with hearing loss over a vast array of listening situations, and lend further support for its tenet that “Well hearing is well being.”
As the successor to its premium Audéo Marvel product line, Audéo Paradise has big shoes to fill. Marvel was introduced in October 2018 and sold over 1 million hearing aids within its first year—the fastest-ever sales for the company and probably Phonak’s most successful hearing aid since its 2005 launch of Phonak Savia. A Marvel 2.0 upgrade was released last August which, among several other things, made RogerDirect technology available to all Marvel hearing aids while expanding form factor options.
The new Phonak Audéo Paradise, officially released today (August 19), is designed to provide “the next level of excellent sound quality” through its new PRISM (Processing Real-time Intelligent Sound Management) sound processing chip that features approximately double the memory of Phonak’s previous chip, “universal” connectivity options, and a new fitting formula designed to provide better fits (particularly for milder losses), reduced reverberation, greater dynamic range, and reduced listening fatigue in noise. The company is also introducing a new version of AutoSense OS™ (ASOS 4.0), the fourth-generation of its successful operating system which augments the existing feature set found in Audéo Marvel with a new speech enhancer, dynamic noise cancellation, and motion-sensor hearing technology for even better performance in noisy environments.
The integrated motion sensor not only detects when the wearer is moving and having a conversation, but it also supports hands-free conversations while connecting with Siri®, Google Assistant™, or Amazon Alexa® via a simple double-tap to the ear. The new hearing aid also features proven lithium-ion rechargeable battery technology that provides a full day of listening, including audio streaming, on a single charge and comes with an easy-to-use portable charging unit.
In July, Phonak held an online premiere of its new Paradise hearing aid for members of the press, and later even allowed participants to try the product for themselves during a remote fitting session (look for the upcoming blog about the editor’s experience with remote programming for a mild hearing loss).
Sound Quality and Innovative App Features
During the online media event, Phonak Product Manager Fabia Müller detailed three new features of the Audéo Paradise. These key features are designed to improve ease-of-use for the hearing aid wearer, while enhancing communication in a multitude of listening situations, particularly in quiet, in loud environments, and for special situations involving movement:
Speech Enhancer is designed for more intimate one-on-one conversations with a friend or loved one by enhancing the peak elements of speech (ie, providing more gain on the soft input speech signals).
Dynamic Noise Cancellation is a new feature that employs a directional beamformer when users are trying to understand speech in a loud environment, like in a restaurant, bar, or playground. The new system works in combination with Phonak’s adaptive beamformer, as well as the motion sensor. Müller says the entire system can provide up to 4 dB SNR improvement.
Motion Sensor Hearing detects if the user is moving or stationary, then seamlessly steers the microphone mode and the dynamic noise cancellation appropriately to maximize the speech signal and retain natural sound.
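Phonak has not published the internals of its beamformer, but the basic arithmetic behind a multi-microphone SNR gain is easy to illustrate: if speech arrives aligned at two microphones while their noise is uncorrelated, averaging the two signals halves the noise power, which is worth roughly 3 dB of SNR. The sketch below is a simplified stand-in for that intuition, not Phonak’s Dynamic Noise Cancellation algorithm, and all signals are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 16000, 1.0
t = np.arange(int(fs * dur)) / fs

speech = np.sin(2 * np.pi * 300 * t)          # stand-in for a speech signal
noise1 = rng.standard_normal(t.size)          # uncorrelated noise at mic 1
noise2 = rng.standard_normal(t.size)          # uncorrelated noise at mic 2

def snr_db(sig, noise):
    """Signal-to-noise ratio in dB, from average power."""
    return 10 * np.log10(np.mean(sig ** 2) / np.mean(noise ** 2))

single = snr_db(speech, noise1)               # one microphone alone
# Zero-delay "delay-and-sum": speech adds coherently, noise does not
summed = snr_db(speech, 0.5 * (noise1 + noise2))
# Halving the uncorrelated noise power yields roughly a 3 dB SNR gain.
```

Real beamformers add microphone-to-microphone delays and adaptive weighting to also suppress directional interferers, which is how figures such as the quoted 4 dB improvement become achievable in practice.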
“With Paradise, we are delivering crisp natural sound, brilliant speech understanding, and personalized noise cancelling,” said Müller. She says the ASOS 4.0 system uses artificial intelligence (AI) to orchestrate a fully automatic experience, blending the new features above into the existing Audéo features to ensure that the beamformer and dynamic noise cancellation are in the appropriate settings—whether one is walking or standing still—in a wide variety of environments.
The myPhonak app can control sound settings for Phonak Paradise users.
Paradise also introduces a suite of personalized digital solutions so hearing aid wearers get the most out of their new hearing aids. Through the myPhonak app, consumers can now easily adjust the level of background noise, and even receive a hearing test directly through hearing aids from a professional remotely, without leaving their home. The Phonak Hearing Screener has also been upgraded so that any person can quickly receive a hearing assessment online.
Audéo Paradise users can also receive help in special listening situations from the app’s Hearing Diary. Within the diary, there are four broad areas: “sound quality,” “speech understanding,” “hearing aid,” and “other.” Within each of these areas, users can rate their satisfaction in various situations like “conversation in quiet,” “restaurant,” “watching TV,” “music,” “workplace,” etc, then provide more specific comments and feedback for assistance and/or possible adjustment.
Broadened Connectivity Options
With Paradise, a simple double-tap to the ear can hail your favourite voice assistant like Siri or Alexa.
With the new Tap Control, Paradise users can activate Siri or Alexa, answer or reject calls, or even pause or resume audio streaming by tapping on the outer ear (upper helix/pinna). In previous Phonak Audéo hearing aids, there were two Bluetooth connections, with only one being active at any one time; with Audéo Paradise, there are now eight possible Bluetooth connections, with two capable of being active via the customisable Tap Controls.
New First-fit Capabilities and Advanced Processing
Phonak has also adapted its proprietary fitting formula to these new capabilities by introducing Adaptive Phonak Digital 2.0 (APD 2.0), an update to the original fitting formula introduced 15 years ago. There are three main changes in the new APD 2.0:
Adaptive compression speeds for greater dynamic range and reduced perception of reverberation;
“Linearized” gain for higher inputs like loud speech-in-noise situations or music (ie, a “louder input kneepoint”), and
A new pre-calculation of the gain settings and amplification schemes for mild-to-moderate hearing losses to provide better first-fit acceptance at the first appointment for this unique user group.
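The “louder input kneepoint” idea in the list above can be sketched as a simple input/output rule: below the kneepoint the hearing aid applies linear gain, and above it wide-dynamic-range compression squeezes level changes; raising the kneepoint keeps loud speech and music in the linear region longer. The toy curve below is illustrative only, assuming made-up gain, ratio, and kneepoint values rather than Phonak’s actual APD 2.0 fitting targets.

```python
def output_level(input_db, kneepoint_db=63.0, ratio=2.0, gain_db=25.0):
    """Toy WDRC input/output curve: linear below the kneepoint,
    compressive above it. All numbers are illustrative, not a real
    fitting formula."""
    if input_db <= kneepoint_db:
        return input_db + gain_db                       # linear region
    # Above the kneepoint, each extra dB of input yields 1/ratio dB of output
    return kneepoint_db + gain_db + (input_db - kneepoint_db) / ratio

# With a 63 dB kneepoint, an 80 dB music passage is heavily compressed:
low_knee = output_level(80, kneepoint_db=63.0)   # 96.5 dB output
# With a louder 75 dB kneepoint, more of it passes through linearly:
high_knee = output_level(80, kneepoint_db=75.0)  # 102.5 dB output
```

Moving the kneepoint upward in this way preserves more of the natural level contrasts in loud inputs, which is the “linearized gain for higher inputs” behaviour the second bullet describes.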
Müller noted that research at Hörzentrum Oldenburg GmbH showed APD 2.0 helped reduce listening effort, particularly in noise. Additionally, ASOS 4.0 uses AI to orchestrate these new features, as well as previous Audéo performance benefits, to provide the best speech intelligibility and sound quality.
In summary, Audéo Paradise is the first hearing aid to benefit from Sonova’s new sound processing chip, PRISM, which delivers crisp, natural sound in any environment for excellent sound quality. In quiet situations, soft voices over distance are enhanced by the Speech Enhancer. With the Motion Sensor Hearing, the hearing aids can detect when the wearer is moving while having a conversation and automatically adjust the directional microphones to focus on the direction of speech. Paradise wearers also have more control over how they hear thanks to a new personalised noise cancelling feature in the myPhonak app.
“When creating our latest hearing solution, we turned to nature for inspiration,” said Martin Grieder, Group Vice President of Marketing for Sonova in a press statement. “Hearing is such an intricate part of our existence and fundamental for our overall well-being. Nature is also the source of so many sounds that can soothe, relax and comfort us. What better way to rediscover sound than with a hearing aid inspired by nature itself – Phonak Audéo Paradise.”
The Bigger Picture of Brain Health and the Future
During the online media event, Phonak Director of Global Audiology Angela Pelosi pointed out that hearing loss fundamentally changes our perceptions of well-being, safety, and security—one of the many reasons why hearing healthcare needs to change its messaging from just solving immediate hearing problems to a more universal message of “Well Hearing is Well Being.” Increasingly, scientific evidence shows that untreated hearing loss is associated with comorbidities like falls, loneliness and depression, increased use of healthcare systems, as well as cognitive impairment (eg, see recent Lancet Commission update that confirmed untreated hearing loss as the largest modifiable risk factor in dementia).
Julia Sarant, PhD, of the University of Melbourne presented information on a study indicating improved executive function for all participants who used hearing aids for 18+ months. The research also found that people with greater degrees of hearing loss are more likely to have poorer cognitive function, and that older adults who use hearing aids may be able to stabilize their cognitive status or actually improve it significantly over time. In other words, “Looking after hearing health is also looking after brain health,” says Dr Sarant.
Paradise Models and Availability
Audéo Paradise is available beginning today via licensed hearing care professionals in the United States. It will be offered in all performance levels across four models, all Roger compatible, including the Audéo P-RT, a lithium-ion rechargeable model with telecoil.
For more details on Audéo Paradise, visit the Phonak website.