37 Million People at Risk of ‘Distraction Danger’ When Wearing Headphones

Call for Hearable Technologies to Improve Safety

London, UK – 14 May 2019: New research shows that a staggering 37 million Americans feel they have put themselves in danger over the past 12 months when wearing headphones or earphones while walking, jogging or cycling. Examples included stepping out into a road, bumping into somebody or not hearing an emergency vehicle approaching.

Commissioned by Audio Analytic, a leading AI technology company focused on sound recognition, the research explored the risks that people across the US are exposed to every day by being distracted from their surroundings while listening to music on the move.

The risk is greater for younger consumers: more than one in four 18- to 34-year-olds (26%) admitted to finding themselves in at least one hazardous situation over the past year, with many doing so multiple times.

Despite these admissions, the research found that most people claim to be aware of the dangers of wearing headphones and earphones in public. Almost all (96%) consider it dangerous to wear headphones or earphones when driving, while other activities deemed dangerous include cycling (91%), running (86%) and commuting on public transport (72%).

Dr Chris Mitchell, CEO and Founder of Audio Analytic, comments: “A worrying number of people are putting their lives at risk every day by wearing headphones and shutting down their sense of hearing. Many of us wear headphones to block out the world and increase our focus, but that brings the risk of losing awareness of our surroundings. Missing important information in our environment can ultimately expose us to dangerous situations, and more needs to be done to prevent accidents from happening. We believe contextually aware AI technology can be an enabler of this.”

Dr Richard Lichenstein, Professor of Pediatrics at the University of Maryland, added: “Our analysis of accident reports showed that a warning was sounded before the crash in 29% of accidents involving pedestrians wearing headphones. Headphone use has become common for a significant proportion of pedestrians, and headphones with noise-cancelling features have become more popular. If noise-cancelling headphones can now be designed to recognize warning sounds and actively alert the wearer to danger, or automatically alter sound transmission to increase awareness, then there is the potential to reduce injury risk amongst headphone wearers.”

The results highlight demand for headphones to make use of artificial intelligence, with 88% wanting their audio devices to recognize and alert them to the sound of emergency vehicle sirens. Other important sounds people want their headphones to recognize include fire alarms (92%), gunshots (90%) and important announcements starting, e.g. train platform changes (83%).

A further 88% of Americans agreed that dynamic noise cancellation, where headphones preserve battery life by automatically turning noise cancellation on only when it is needed, would be useful in a range of locations such as the home, the commute and the gym. In addition, 58% would purchase hearables with dynamic sound equalization, which enables the hearables to optimize the audio experience for different acoustic environments.

Dr Chris Mitchell continued: “Modern headphones with active noise cancellation can increase the risk of distraction danger, but these devices also offer a solution. They are fitted with external microphones, presenting an opportunity to add intelligent sound recognition to ensure contextual awareness. When the earphones themselves can hear and recognize important sounds, like a siren, car horn or even a doorbell or somebody talking, the devices can alert the wearer or instantly change settings to allow more sound through to enhance awareness. In addition, by better understanding the world around us, wearables with sound recognition could also enhance sound quality and better manage battery power. Advanced AI tech can make the next headphones we buy intelligent enough to understand context. We can then lose ourselves in our music without losing touch with the world around us.”

Other facts from the report include:

  • 74% of US respondents own two or more pairs of headphones (in addition to those supplied with phones)
  • 46% of people wear headphones for more than two hours a day
  • 54% are ‘excited by’ or ‘think it would be useful’ to have artificial intelligence on hearables; only 12% would be worried about it or would avoid it
  • 81% of respondents are willing to sacrifice their battery life in return for more intelligent features
  • The industry has passed the wired vs wireless tipping point, with wireless devices now more popular among consumers, especially amongst those spending more than $101

Download the full Global 2019 Hearables Report: AI Attitudes and Expectations at audioanalytic.com/hearables2019.

*Ends*

Notes to editors
The survey was conducted among 6,012 consumers in the UK and USA (3,008 in the UK and 3,004 in the USA).

The research was conducted independently by Sapio Research on behalf of Audio Analytic at the end of March 2019. Respondent breakdown is representative across gender, geography and age. In this study, the chances are 95 in 100 that a survey result does not vary, plus or minus, by more than 1.3% from the result that would have been obtained if interviews had been conducted with all persons in the countries.
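The quoted 1.3% margin of error is consistent with the standard formula for a proportion at 95% confidence. A minimal sketch of that check, assuming the conventional 1.96 z-score and the worst-case proportion p = 0.5 (neither is stated in the report):

```python
import math

n = 6012   # total respondents across UK and USA
z = 1.96   # z-score for a 95% confidence level (assumed)
p = 0.5    # worst-case proportion, which maximizes the margin (assumed)

# Margin of error for a sample proportion: z * sqrt(p * (1 - p) / n)
margin = z * math.sqrt(p * (1 - p) / n)
print(f"{margin:.1%}")  # prints "1.3%", matching the report's stated figure
```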

According to the United States Census in 2010 (https://www.census.gov/prod/cen2010/briefs/c2010br-03.pdf), there are 234,564,071 US residents over 18 years of age. Our survey found that 15.72% of respondents felt they had put themselves in a dangerous situation at least once in the last 12 months while wearing headphones, earphones or earbuds.
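The headline figure of 37 million follows from applying that survey percentage to the census adult population. A back-of-the-envelope check (not taken from the report itself):

```python
adults = 234_564_071   # US residents over 18, per the 2010 Census
share = 0.1572         # 15.72% of respondents reported a dangerous situation

# Scale the survey proportion up to the adult population
at_risk = adults * share
print(f"{at_risk:,.0f}")  # prints "36,873,472", which rounds to 37 million
```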

About Audio Analytic
Audio Analytic is the global leader in intelligent sound recognition, using advanced, edge-based AI to provide consumer technology with a wide sense of hearing.

At the heart of the company is a technology platform made up of two synergistic parts:

  • Alexandria™ is the world’s largest commercially-usable audio dataset for machine learning, featuring millions of audio files that are structured taxonomically with full data provenance.
  • AuditoryNET™ is a highly-optimized deep neural network for sound recognition, which models the ideophonic features of sounds.

Alongside this core technology, the company employs nearly 50 experts in acoustics, data, machine learning and embedded software engineering.

The company has successfully licensed its technology to major global brands, including two of the world’s biggest companies plus Hive, Iliad, Sengled and others. The company has partners including Arm, Intel, Knowles, Ambarella, Ambiq, Vesper, Frontier and others.

Audio Analytic Ltd. is a privately held, VC-backed company, founded in 2010 and headquartered in Cambridge, UK with offices in San Francisco.

Contact
Marnie Spicer
Kaizo PR
+44 (0) 20 3176 4723
audioanalytic@kaizo.co.uk

Neil Cooper
Audio Analytic
neil.cooper@audioanalytic.com