While AI in automotive has historically focused on what’s going on outside of the vehicle, OEMs and Tier 1 suppliers are beginning to turn cameras and sensors inwards, using AI to understand all things human within a vehicle. New safety ratings and regulations, such as the European New Car Assessment Programme (Euro NCAP) roadmap, call for next-generation safety features including advanced driver state monitoring, child presence detection and more. And looking ahead to the future of mobility, consumers will shift from assessing a vehicle’s driving performance to focusing on the best in-cabin experience. Whether or not I’m behind the wheel, which car can keep me the safest? The most productive? The most entertained? The most relaxed?
Affectiva Automotive AI powers in-cabin sensing systems that perceive all things human inside a vehicle. OEMs, Tier 1 suppliers, ridesharing providers and fleet management companies are using Affectiva’s patented technology to understand drivers’ and passengers’ states and moods, in order to address critical safety concerns and deliver enhanced in-cabin experiences. Affectiva Automotive AI unobtrusively measures, in real time, complex and nuanced emotional and cognitive states from face and voice. The technology will scale with evolving safety standards and future-of-mobility needs, with the potential to perceive passengers, activities, interactions, and all facets of the human experience inside a vehicle.
“Affectiva’s new product moves the company into the autonomous vehicles sector, an industry receiving a lot of interest and investment. The decision could help the startup differentiate its business from competitors in emotion-sensing technology, which include EMOSpeech and Vokaturi.”
“The future of transportation is AI whether computers do the driving or not. We desperately need to change the way we approach the manufacture and operation of motor vehicles. Over one million people die every year in vehicle collisions. With Automotive AI, it’s possible the roads could become a little safer thanks to Affectiva and other companies working on changing things for the better.”
“…with the camera pointed at the safety driver, Renovo can then tell whether that person is tired or distracted, and deliver the right prompts or warnings to ensure attention remains on the road ahead. And that’s where Renovo and Affectiva’s collaboration perhaps could have prevented the fatal Uber collision last month.”
Deep learning allows Affectiva to model more complex problems with higher accuracy than other machine learning techniques. It solves a variety of problems, such as classification, segmentation and temporal modeling. It also enables end-to-end joint learning of multiple complex tasks, including face detection and tracking, speaker diarization, voice-activity detection, and emotion classification from face and voice.
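The joint-learning idea above can be sketched in a few lines: a single shared trunk computes one representation that several task heads reuse. This is a minimal, hypothetical NumPy illustration; the layer sizes, class counts, and head names are illustrative assumptions, not Affectiva’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shared trunk: 64-dim input features -> 32-dim shared representation.
W_shared = rng.standard_normal((64, 32)) * 0.1

# Two task heads reuse that representation (sizes are assumptions):
W_emotion = rng.standard_normal((32, 7)) * 0.1   # e.g. 7 emotion classes
W_vad = rng.standard_normal((32, 2)) * 0.1       # speech / no-speech

def forward(features):
    shared = relu(features @ W_shared)           # computed once, shared by all heads
    return softmax(shared @ W_emotion), softmax(shared @ W_vad)

batch = rng.standard_normal((4, 64))             # 4 frames of input features
emotion_probs, vad_probs = forward(batch)
print(emotion_probs.shape, vad_probs.shape)      # (4, 7) (4, 2)
```

Because the trunk is evaluated once per input regardless of how many heads consume it, sharing layers amortizes most of the compute across tasks, which is what makes joint models attractive on embedded hardware.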
Affectiva has already analyzed over 7.5 million faces in 87 countries and has used this data to build its core Emotion AI technology. We also collect large amounts of spontaneous driver and occupant data so we can tune our existing algorithms for automotive environments, and develop new automotive metrics. We are building this large automotive data corpus through in-car data collection, lab simulation environments and data partnerships.
Our deep learning models provide accurate, real-time estimates of emotions and cognitive states on mobile devices and embedded systems. Our focus is on building highly efficient models that achieve high accuracy with a minimal footprint. Our approach involves joint training across a variety of tasks using shared layers between models, combined with iterative benchmarking and profiling of on-device performance, and model compression (training compact models from larger models).
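One common way to train a compact model from a larger one is knowledge distillation: the small student is trained to match the large teacher’s softened output distribution. The sketch below assumes that reading of “model compression”; the temperature value and sizes are illustrative, and the exact technique Affectiva uses is not specified in the text.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between the teacher's softened targets and the student."""
    t = softmax(teacher_logits, temperature)   # soft targets from the big model
    s = softmax(student_logits, temperature)
    return -np.mean(np.sum(t * np.log(s + 1e-12), axis=-1))

rng = np.random.default_rng(1)
teacher = rng.standard_normal((8, 7))   # frozen large-model logits (assumed shape)
student = rng.standard_normal((8, 7))   # small-model logits, to be optimized
loss = distillation_loss(student, teacher)
print(float(loss) > 0.0)
```

Minimizing this loss with respect to the student’s parameters pushes the compact model toward the teacher’s behavior; the loss is smallest when the student reproduces the teacher’s distribution exactly.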