
The First Software Vision Sensor
Observation is key.


We increase robot autonomy by providing the visual common sense robots need to navigate.
What we do
Our focus is on Observation.
Autonomous robotics is two things: Observation and Action.
Autonomy comes from Intelligence. Intelligence comes from Observation. Providing a high level of scene understanding to a mobile robot is the first step towards autonomy.

What we offer
Essential information for navigation.
Relying on camera sensors, Aloception provides the generic information essential to any robot to navigate through a scene.
This software optimizes autonomous navigation by answering three fundamental questions with three layers of information.


Navigable areas
Where to navigate?

Obstacle detection
What obstacles to avoid?

SLAM
What is the robot's relative position?
Who we help
Same core technology for All.
Aloception tackles the autonomous mobility challenge by providing the same core technology to robotics manufacturers from different sectors.

AMR
Industrial and service autonomous mobile robots.
Manufacturing industry, Logistics, Agriculture, Health

ADAS
Assistance systems and autonomous driving
Automotive, Airport logistics, Industrial ports

Boats
Assistance systems for marine transportation
Pleasure boats and small marine vehicles

Drones
Aerial and underwater drones
Sensitive industries, Airport industry, Logistics

Heavy vehicles
Public transport and heavy vehicles
City transport, trucks, construction machines

More applications
The advantage of genericity is Adaptability.
Please contact us to discuss your use case.
How we do it
Children walk
before they talk.
How can child development inspire new AI vision software for autonomy?
Like a child who understands the physical characteristics of things before knowing their names, Aloception focuses first on the depth and geometric understanding of the environment, before the semantics of things.
Camera-based only
Monocular, stereo, fisheye
Unified architecture
Automatic sensor calibration and fusion
Spatio-temporal reasoning
Temporal and geometric analysis for greater robustness
Lightweight real-time understanding
Embedded system and recurrent formulation

Humans navigate with only their two eyes.
The rest happens in the brain.
We focus on the brain, not on extra eyes.
Who we are
Ambitious team for ambitious goals.
Founded in 2020 by Rémi Agier and Thibault Neveu, Visual Behavior is now a team of 15 creative and passionate people focused on solving observation problems for robotics.
Drawing on our extensive experience in cognitive science, computer vision, and robotics engineering, we aim to help autonomous mobility companies by providing a scalable and affordable scene understanding solution.

Rémi AGIER
CEO & Co-founder

Thibault NEVEU
CTO & Co-founder