Every day, over 1,000 injuries and nine fatalities are caused by distracted driving in the U.S., and up to 6,000 fatal crashes each year may be caused by drowsy drivers. This points to a major need for driver monitoring to help improve road safety. While AI in automotive has historically focused on what’s going on outside of the vehicle, OEMs and Tier 1 suppliers are beginning to turn cameras and sensors inward, using AI to gather insight on what’s going on with the people inside the vehicle. And looking ahead to the evolution of autonomous capabilities, consumers will shift away from assessing a vehicle’s driving performance to focusing on the best in-cabin experience. Whether or not I’m behind the controls, which car can keep me the safest? The most productive? The most entertained? The most relaxed?
To improve road safety and to offer a stellar transportation experience, there must be a deep understanding of driver and occupant emotions, cognitive states, and reactions to the driving experience. Affectiva Automotive AI unobtrusively measures, in real time, complex and nuanced emotional and cognitive states from face and voice. This next-generation in-cabin software enables OEMs and Tier 1s to monitor driver state and measure occupant mood and reactions.
“Affectiva’s new product moves the company into the autonomous vehicles sector, an industry receiving a lot of interest and investment. The decision could help the startup differentiate its business from competitors in emotion-sensing technology, which include EMOSpeech and Vokaturi.”
"The future of transportation is AI whether computers do the driving or not. We desperately need to change the way we approach the manufacture and operation of motor vehicles. Over one million people die every year in vehicle collisions. With Automotive AI, it’s possible the roads could become a little safer thanks to Affectiva and other companies working on changing things for the better."
“…with the camera pointed at the safety driver, Renovo can then tell whether that person is tired or distracted, and deliver the right prompts or warnings to ensure attention remains on the road ahead. And that’s where Renovo and Affectiva’s collaboration perhaps could have prevented the fatal Uber collision last month.”
Deep learning allows Affectiva to model more complex problems with higher accuracy than other machine learning techniques. It solves a variety of problems such as classification, segmentation, and temporal modeling. It also enables end-to-end joint learning of multiple complex tasks, including face detection and tracking, speaker diarization, voice-activity detection, and emotion classification from face and voice.
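The joint-learning idea above can be sketched in a few lines: one shared trunk transforms the input, and several task-specific heads branch off it so the tasks are learned together. This is a minimal illustrative sketch, not Affectiva's actual architecture; the layer sizes, the four-class emotion head, and the two-class voice-activity head are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shared layers: one trunk produces a representation reused by every task head.
W_shared = rng.normal(size=(16, 8))

# Task-specific heads (hypothetical sizes for illustration).
W_emotion = rng.normal(size=(8, 4))  # e.g. 4 emotion classes
W_vad = rng.normal(size=(8, 2))      # voice activity: speech / non-speech

def forward(x):
    h = relu(x @ W_shared)           # shared representation
    return softmax(h @ W_emotion), softmax(h @ W_vad)

x = rng.normal(size=(1, 16))         # stand-in for extracted input features
emotion_probs, vad_probs = forward(x)
print(emotion_probs.shape, vad_probs.shape)  # (1, 4) (1, 2)
```

Because the trunk is shared, a gradient update for either head also improves the common representation, which is what makes joint training efficient compared with maintaining fully separate models per task.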
Affectiva has already analyzed over 6.5 million faces in 87 countries and has used this data to build its core Emotion AI technology. We also collect large amounts of spontaneous driver and occupant data so we can tune our existing algorithms for automotive environments and develop new automotive metrics. We are building this large automotive data corpus through in-car data collection, lab simulation environments, and data partnerships.
Our deep learning models provide accurate, real-time estimates of emotions and cognitive states on mobile devices and embedded systems. Our focus is on building highly efficient models that achieve high accuracy with a minimal footprint. Our approach involves joint training across a variety of tasks using shared layers between models, combined with iterative benchmarking and profiling of on-device performance, and model compression, i.e., training compact models from larger models.
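One common form of "training compact models from larger models" is knowledge distillation, where a small student model is trained to match the softened output distribution of a large teacher. The sketch below shows only the distillation loss; it assumes standard temperature-scaled distillation and is one plausible reading of the compression step, not a description of Affectiva's specific pipeline.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature T > 1 softens the distribution, exposing the teacher's
    # relative confidence across classes ("dark knowledge").
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Cross-entropy between the teacher's softened probabilities and the
    # student's; minimizing it pulls the student toward the teacher.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -np.sum(p_teacher * np.log(p_student + 1e-12))

teacher = [8.0, 2.0, 1.0]        # confident large-model logits (hypothetical)
good_student = [7.5, 2.2, 0.9]   # roughly agrees with the teacher
bad_student = [1.0, 6.0, 2.0]    # disagrees with the teacher

loss_good = distillation_loss(good_student, teacher)
loss_bad = distillation_loss(bad_student, teacher)
print(loss_good < loss_bad)  # True
```

In practice this loss is typically combined with the ordinary hard-label loss, and the student's compact architecture is what yields the minimal on-device footprint.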