The OSO-AI start-up has recently completed a €4 million funding round. Its artificial intelligence solution, which can detect incidents such as falls or cries for help, has won over investors as well as a number of the nursing homes in which it has been installed. The technology grew in part out of the work of Claude Berrou, a researcher at IMT Atlantique and the company’s co-founder and scientific advisor.
OSO-AI, a company incubated at IMT Atlantique, is the result of an encounter between Claude Berrou, a researcher at the engineering school, and Olivier Menut, an engineer at STMicroelectronics. Together, they started to develop artificial intelligence that can recognize specific sounds. After completing a €4 million funding round, the start-up now plans to fast-track the development of its product: ARI (French acronym for Smart Resident Assistant), a solution designed to alert staff in the event of an incident inside a resident’s room.
The device takes the form of an electronic unit equipped with high-precision microphones. ARI’s goal is to “listen” to the sound environment in which it is placed and send an alert whenever it picks up a worrying sound. The information is then transmitted via Wi-Fi and processed in the cloud.
“Normally, in nursing homes, there is only a single person on call at night,” says Claude Berrou. “They hear a cry for help at 2 am but don’t know which room it came from. So they have to go seek out the resident in distress, losing precious time before they can intervene and waking up many residents in the process. With our system, the caregiver on duty receives a message such as, ‘Room 12, 1st floor, cry for help,’ directly on their mobile phone.” The technology therefore saves time that can be life-saving for an elderly person. Because it is less intrusive than a surveillance camera, it is also better accepted, especially since it pauses whenever someone else enters the room. It also helps relieve the workload and mental burden placed on the staff.
OSO-AI is inspired by how the brain works
But how can an information system hear and analyze sounds? The device developed by OSO-AI relies on machine learning, a branch of artificial intelligence, and artificial neural networks. In a certain way, this means that it tries to imitate how the brain works. “Any machine designed to reproduce basic properties of human intelligence must be based on two separate networks,” explains the IMT Atlantique researcher. “The first is sensory-based and innate: it allows living beings to react to external factors based on the five senses. The second is cognitive and varies depending on the individual: it supports long-term memory and leads to decision-making based on signals from diverse sources.”
How is this model applied to the ARI unit and the computers that receive the preprocessed signals? A first “sensory-based” layer captures the sounds through the microphones and turns them into representative vectors. These are then compressed and sent to a second, “cognitive” layer, which analyzes the information, relying in particular on neural networks, in order to decide whether or not to issue an alert. The system makes its decision by comparing new data with what is already stored in its memory. For example, if a cognitively-impaired resident tends to call for help all the time, it must be able to decide not to warn the staff every time.
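As a rough illustration of this two-layer idea, the sketch below turns an audio buffer into a compact feature vector (the “sensory” step) and then compares it with stored reference vectors to decide whether an alert is warranted (the “cognitive” step). The feature extraction, the similarity threshold and the alert-suppression rule are illustrative assumptions, not OSO-AI’s actual implementation.

```python
import numpy as np

# --- "Sensory" layer: turn raw audio into a compact representative vector ---
def extract_features(audio: np.ndarray, n_bands: int = 32) -> np.ndarray:
    """Crude log-spectral energy vector for a mono audio buffer.
    Illustrative only: a real system would use learned or mel-scale features."""
    spectrum = np.abs(np.fft.rfft(audio))
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([band.mean() for band in bands]))

# --- "Cognitive" layer: compare new vectors with stored references ---
class AlertDecider:
    def __init__(self, references, threshold=0.9):
        self.references = references      # label -> reference feature vector
        self.threshold = threshold        # minimum cosine similarity to match
        self.recent_counts = {}           # crude memory of repeated events

    def decide(self, features):
        best_label, best_score = None, -1.0
        for label, ref in self.references.items():
            score = float(np.dot(features, ref) /
                          (np.linalg.norm(features) * np.linalg.norm(ref) + 1e-9))
            if score > best_score:
                best_label, best_score = label, score
        if best_score < self.threshold:
            return None                   # nothing worrying recognized
        # Suppress alerts for events that repeat constantly (e.g. habitual calls)
        self.recent_counts[best_label] = self.recent_counts.get(best_label, 0) + 1
        return best_label if self.recent_counts[best_label] <= 3 else None

# Example: one reference vector per worrying event, then a new audio buffer
rng = np.random.default_rng(0)
refs = {"cry_for_help": extract_features(rng.standard_normal(16000)),
        "fall": extract_features(rng.standard_normal(16000))}
decider = AlertDecider(refs)
print(decider.decide(extract_features(rng.standard_normal(16000))))
```

In a deployed system, the hand-made spectral features would be replaced by a trained neural encoder and the similarity test by a proper classifier, but the division of labor between the two layers would remain the same.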
The challenges of the learning phase
Like any self-learning system, ARI must go through a crucial training phase in which it forms an initial memory that will subsequently be expanded. This step raises two main problems.
First of all, it must be able to interpret the words spoken by residents, using a speech-to-text tool that turns a speaker’s words into written text. But ARI’s environment presents certain challenges of its own. “Elderly individuals may express themselves with a strong accent or in a soft voice, which makes their diction harder to understand,” says Claude Berrou. The company has therefore tailored its algorithms to these factors.
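A very simple way to picture this stage, leaving the transcription engine itself abstract, is shown below: the audio is first normalized so that soft voices are not lost, then the transcript is scanned for distress phrases. The `transcribe` callback, the target volume and the phrase list are hypothetical placeholders, since the article does not say which tools or adaptations OSO-AI actually uses.

```python
import numpy as np

DISTRESS_PHRASES = {"help", "au secours", "aidez-moi"}   # illustrative keyword list

def normalize_volume(audio: np.ndarray, target_rms: float = 0.1) -> np.ndarray:
    """Boost soft voices so that quiet speech is not lost before transcription."""
    rms = float(np.sqrt(np.mean(audio ** 2))) + 1e-9
    return audio * (target_rms / rms)

def detect_call_for_help(audio: np.ndarray, transcribe) -> bool:
    """`transcribe` stands for any speech-to-text backend that takes an audio
    array and returns text; it is a placeholder, not a named OSO-AI component."""
    text = transcribe(normalize_volume(audio)).lower()
    return any(phrase in text for phrase in DISTRESS_PHRASES)

# Usage with a dummy transcriber that always "hears" the same sentence
print(detect_call_for_help(np.zeros(16000), transcribe=lambda _: "Au secours !"))
```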
Second, what about other sounds that occur less frequently, such as a fall? In these cases, the analysis is even more complex. “That’s a major challenge for artificial intelligence and neural networks: weakly-supervised learning, meaning learning from a limited number of examples or too few to be labeled,” explains the IMT Atlantique researcher. “What is informative is that it’s rare. And that which is rare is not a good fit for current artificial intelligence since it needs a lot of data.” OSO-AI is also innovative in this area of weakly-supervised learning.
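One standard way of working from only a handful of labeled examples is a nearest-prototype (few-shot) classifier: average the few embeddings available for each rare event, then assign a new sound to the closest prototype or reject it as unknown. The sketch below is a generic illustration of that family of techniques, not a description of OSO-AI’s proprietary method; the embedding size and rejection distance are arbitrary.

```python
import numpy as np

def class_prototypes(examples):
    """Average the few labeled embeddings available for each rare event."""
    return {label: np.mean(vectors, axis=0) for label, vectors in examples.items()}

def classify(embedding, prototypes, reject_distance=2.0):
    """Assign the embedding to the nearest prototype, or reject it as unknown."""
    label, dist = min(((lbl, float(np.linalg.norm(embedding - proto)))
                       for lbl, proto in prototypes.items()), key=lambda x: x[1])
    return label if dist < reject_distance else None

# Only a handful of labeled examples per rare event (random stand-ins here)
rng = np.random.default_rng(1)
few_shot = {"fall": [rng.standard_normal(32) for _ in range(3)],
            "footsteps": [rng.standard_normal(32) for _ in range(3)]}
prototypes = class_prototypes(few_shot)
print(classify(rng.standard_normal(32), prototypes))   # far from both -> None
```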
Data is precisely a competitive advantage on which OSO-AI intends to rely. As it is installed in a growing number of nursing homes, the technology acquires an increasingly detailed knowledge of sound environments. Little by little, it builds a common base of sounds (falls, footsteps, doors, etc.) that can be reused across many nursing homes.
From nursing homes to home care
The product has now completed its proof-of-concept phase: approximately 300 devices have been installed in seven nursing homes, and marketing has begun. The recent funding round will help fast-track the company’s technological and business development, tripling its workforce to thirty employees by the end of 2021.
The start-up is already planning to deploy its system to help elderly people remain in their own homes, another important societal issue. Lastly, according to Claude Berrou, one of OSO-AI’s most promising future applications is well-being monitoring, particularly for nursing home residents. Beyond situations of distress, the technology could detect unusual signs in residents, such as a more pronounced cough. In light of the current situation, there is no doubt that such a function would be highly valued.