ADVANCED DRIVER-ASSISTANCE SYSTEMS

ADAS

At Visual Behavior, we aim to help humans drive more safely and comfortably. We provide a scalable ADAS vision system based solely on a set of monocular and stereo cameras. AloforADAS integrates multi-sensory inputs from multiple points of view for 360° scene comprehension of the world.

Request the demo

Detection of the ground, lines and navigation axes

3D detection of fixed/moving objects

3D people detection

Obstacle avoidance

Visual odometry

Real-time analysis

Modular system

AloforADAS is built from our technology blocks, adapted for the ADAS use case:

Alopix provides pixel-level spatial understanding of the world, including depth perception, motion, and semantic segmentation. Based on its persistent state, the system can leverage multiple cameras for multi-view comprehension while running in real time on embedded systems.

As humans reason about the world through concepts and entities, robots should too. Alosym fills the gap between low-level signals from multi-modal sensor inputs and a high-level symbolic representation. It enables symbolic reasoning for detection, tracking, prediction, and complex interaction forecasting in robotics scenarios.
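The two blocks above describe a pipeline in which pixel-level perception feeds symbolic reasoning: per-camera depth, motion, and segmentation are produced first, then lifted into tracked entities across all views. A minimal sketch of that data flow is shown below; every class, function, and field name is an illustrative assumption, not the actual AloforADAS API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the pixel-level -> symbolic pipeline described
# above. Names (PixelState, Entity, alopix_stage, alosym_stage) are
# assumptions for illustration only, not the real product interface.

@dataclass
class PixelState:
    """Per-camera pixel-level output (Alopix stage)."""
    camera_id: str
    depth: list          # per-pixel depth estimates (stand-in for an image)
    motion: list         # per-pixel motion vectors
    segmentation: list   # per-pixel semantic labels

@dataclass
class Entity:
    """High-level symbolic object (Alosym stage)."""
    label: str
    position_3d: tuple
    track_id: int

def alopix_stage(camera_id: str, frame: list) -> PixelState:
    # Placeholder: a real system would run dense prediction networks here.
    n = len(frame)
    return PixelState(camera_id,
                      depth=[1.0] * n,
                      motion=[(0.0, 0.0)] * n,
                      segmentation=["road"] * n)

def alosym_stage(states: list) -> list:
    # Placeholder: lift pixel-level evidence from all camera views into
    # tracked symbolic entities that downstream reasoning can consume.
    entities = []
    for track_id, st in enumerate(states):
        entities.append(Entity(label="vehicle",
                               position_3d=(0.0, 0.0, st.depth[0]),
                               track_id=track_id))
    return entities

# Multi-view comprehension: each camera contributes a pixel-level state,
# and the symbolic stage reasons over all views jointly.
frames = {"front": [0] * 4, "rear": [0] * 4}
states = [alopix_stage(cid, f) for cid, f in frames.items()]
entities = alosym_stage(states)
```

The modular split mirrors the text: swapping in a different camera rig only changes how many `PixelState` objects are produced, while the symbolic stage stays unchanged.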

Visual Behavior is open to industrial partnerships to develop a demonstrator for your real-world use case.

Request the demo