VISUAL BEHAVIOR

We are working on the future of robotics

Visual Behavior provides scalable solutions for autonomous robots. We develop an Artificial Visual Cortex, software that emulates the human brain to give robots a high-level understanding of their surroundings. Visual Behavior’s products are meant to unlock previously unreachable use cases for UAVs, self-driving cars, AGVs, cobots, indoor human assistance, and advanced human-robot interaction.

PRODUCTS

Our technology provides robots with all the insights required to reach full autonomy. To facilitate integration, we have designed products for various types of robotic perception, including AGVs, ADAS & UAVs.

Automated guided vehicles

AloforAGV is designed to make AGVs autonomous in complex scenarios, in collaboration with the human workforce.

Based on the Alopix & Alosym technologies, we provide rich information about the scene geometry around the robot, so mobile robots can better perceive complex obstacles and reachable areas. Our product can be deployed on any existing set of mono or stereo cameras.

Advanced driver-assistance systems

AloforADAS is specialized in self-driving scenarios, helping humans drive more safely.

Our product provides vehicles with the ability to detect and track objects & obstacles using only monocular or stereo cameras. Without costly lidar sensors, rich information about the scene geometry around the vehicle is accessible, enabling autonomous applications at low cost.


How we work

Visual Behavior’s core technology is an Artificial Visual Cortex, AI-powered software for scene comprehension inspired by the architecture of the mammalian visual cortex. The technology is based on a new paradigm centered on a scene representation rather than on the sensors: the system maintains an internal, persistent symbolic representation of the world that is updated with every new piece of external information.
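
As a rough illustration only (every name below is hypothetical, not Visual Behavior’s actual API), this sensor-agnostic paradigm can be sketched in a few lines of Python: the world model outlives any single frame, and each new observation refines the existing state instead of replacing it.

    from dataclasses import dataclass

    # Hypothetical sketch of a persistent symbolic world representation.
    # Entity, Observation and WorldModel are illustrative names.

    @dataclass
    class Entity:
        entity_id: int
        label: str             # e.g. "pallet", "person"
        position: tuple        # (x, y, z) in the robot frame
        confidence: float

    @dataclass
    class Observation:
        sensor: str            # "front_cam", "stereo_left", ...
        timestamp: float
        detections: list       # Entity candidates seen in this frame

    class WorldModel:
        """Persistent scene state, refined by each new observation."""

        def __init__(self):
            self.entities = {}  # entity_id -> Entity

        def update(self, obs: Observation) -> None:
            # New information updates the existing representation
            # rather than rebuilding it from scratch.
            for det in obs.detections:
                known = self.entities.get(det.entity_id)
                if known is None:
                    self.entities[det.entity_id] = det
                else:
                    # Confidence-weighted blend as a stand-in for a
                    # real filtering / data-association step.
                    w = det.confidence / (det.confidence + known.confidence)
                    known.position = tuple(
                        (1 - w) * a + w * b
                        for a, b in zip(known.position, det.position)
                    )
                    known.confidence = max(known.confidence, det.confidence)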

Ground detection & gripping surface

Alopix provides pixel-level spatial understanding of the world, including depth perception, motion, and semantic segmentation. Thanks to its persistent state, the system can leverage multiple cameras for multi-view comprehension while running in real time on embedded systems.
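
To make this concrete, here is a minimal sketch assuming made-up names and values (GROUND_ID, the 5 m range), not the real Alopix interface: once dense depth and semantic maps are available for a frame, finding the reachable area in front of a mobile robot reduces to a per-pixel test.

    import numpy as np

    # Illustrative only: combine a dense depth map (metres) and a
    # semantic segmentation map to mask the traversable area.
    GROUND_ID = 0  # hypothetical semantic label for traversable ground

    def reachable_mask(depth, classes, max_range_m=5.0):
        """Pixels that are ground AND close enough to plan over."""
        return (classes == GROUND_ID) & (depth > 0) & (depth <= max_range_m)

    # Tiny dummy example: a 2x4 image where the third column is too far
    # away and the last column is not ground.
    depth = np.array([[1.0, 2.0, 8.0, 3.0],
                      [1.0, 2.0, 8.0, 3.0]])
    classes = np.array([[0, 0, 0, 1],
                        [0, 0, 0, 1]])
    print(reachable_mask(depth, classes))
    # [[ True  True False False]
    #  [ True  True False False]]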

Understanding human behavior

As humans reason about the world through concepts and entities, robots should too. Alosym bridges the gap between low-level signals from multi-modal sensor inputs and a high-level symbolic representation. It enables high-level symbolic reasoning for detection, tracking, prediction, and complex interaction forecasting in robotics scenarios.
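
A hedged sketch of that signal-to-symbol step (illustrative names, not the Alosym API): per-frame detections are lifted into persistent tracked entities that carry an identity and support a simple motion forecast. A real system would use proper data association and a learned motion model.

    from dataclasses import dataclass

    # Illustrative only: raw detections (label + position) become
    # symbolic Track entities that persist across frames.

    @dataclass
    class Track:
        track_id: int
        label: str                    # e.g. "person", "forklift"
        position: tuple               # (x, y) in the robot frame
        velocity: tuple = (0.0, 0.0)  # per-frame displacement

        def predict(self, steps: float = 1.0) -> tuple:
            """Constant-velocity forecast of the entity's next position."""
            return (self.position[0] + self.velocity[0] * steps,
                    self.position[1] + self.velocity[1] * steps)

    def update_tracks(tracks, detections, gate=1.0):
        """Nearest-neighbour association of detections to existing tracks."""
        for label, pos in detections:
            candidates = [t for t in tracks if t.label == label]
            if not candidates:
                continue
            best = min(candidates, key=lambda t:
                       (t.position[0] - pos[0]) ** 2 +
                       (t.position[1] - pos[1]) ** 2)
            dx, dy = pos[0] - best.position[0], pos[1] - best.position[1]
            if dx * dx + dy * dy <= gate * gate:
                best.velocity = (dx, dy)
                best.position = pos

    # Example: one tracked person moving along x.
    tracks = [Track(track_id=1, label="person", position=(0.0, 0.0))]
    update_tracks(tracks, [("person", (0.5, 0.0))])
    print(tracks[0].predict())  # (1.0, 0.0)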

They trust us

Visual Behavior is organizing its first Hackathon from 1 to 3 April 2022 in Lyon.

Want to join our team?