The First Software Vision Sensor

Observation is key.

We increase robots' autonomy by providing them with the visual common sense they need to navigate.

What we do

Our focus is on Observation.

Autonomous robotics is two things: Observation and Action.

Autonomy comes from Intelligence. Intelligence comes from Observation. Providing a high level of scene understanding to a mobile robot is the first step towards autonomy.

Illustration: Aloception's position in the value chain

What we offer

Essential information for navigation.

Relying on camera sensors, Aloception provides the generic information any robot needs to navigate through a scene.

This software optimizes autonomous navigation by answering three fundamental questions with three layers of information.

Navigable areas

Where to navigate?

Obstacle detection

What obstacles to avoid?

SLAM

What is my relative position?
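
To make the three layers concrete, here is a minimal, hypothetical sketch of how the answers to those three questions could be grouped per camera frame. All names and structures below are illustrative assumptions, not the actual Aloception interface.

```python
# Hypothetical per-frame output: navigable area, obstacles, relative pose.
# Illustrative only; not the Aloception API.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Obstacle:
    """A detected obstacle, expressed in the robot frame (assumed convention)."""
    position_m: Tuple[float, float, float]  # x, y, z in metres
    size_m: Tuple[float, float, float]      # bounding-box extents


@dataclass
class SceneUnderstanding:
    """Three layers: where to navigate, what to avoid, what relative position."""
    navigable_mask: List[List[bool]]                     # per-pixel free space
    obstacles: List[Obstacle] = field(default_factory=list)
    pose: Tuple[float, float, float] = (0.0, 0.0, 0.0)   # x, y, heading (SLAM)


# Toy example: a 2x2 frame whose lower half is navigable, with one obstacle ahead.
frame_result = SceneUnderstanding(
    navigable_mask=[[False, False], [True, True]],
    obstacles=[Obstacle(position_m=(2.0, 0.0, 0.0), size_m=(0.5, 0.5, 1.0))],
    pose=(1.2, 0.4, 0.1),
)
print(f"{len(frame_result.obstacles)} obstacle(s) detected")
```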

Who we help

Same core technology for All.

Aloception tackles the autonomous mobility challenge by providing the same core technology to robotics manufacturers from different sectors.

AMR

Industrial and service autonomous mobile robots.
Manufacturing industry, Logistics, Agriculture, Health

ADAS

Assistance systems and autonomous driving
Automotive, Airport logistics, Industrial ports

Boats

Assistance systems for marine transportation
Pleasure boats and small marine vehicles

Drones

Aerial and underwater drones
Sensitive industries, Airport industry, Logistics

Heavy vehicles

Public transport and heavy vehicles
City transport, trucks, construction machinery

More applications

The advantage of genericity is Adaptability.
Please contact us to discuss your use case.

How we do it

Children walk
before they talk.

How can child development inspire new AI vision software for autonomy?

Like a child who understands the physical characteristics of things before knowing their names, Aloception focuses first on depth and geometric understanding of the environment, and only then on semantics.

Camera-based only

Monocular, stereo, fisheye

Unified architecture

Automatic sensor calibration and fusion

Spatio-temporal reasoning

Temporal and geometric analysis for greater robustness

Lightweight real-time understanding

Embedded systems and a recurrent formulation
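
As a rough illustration of the last two points, the sketch below shows the shape of a recurrent, per-frame loop: each new camera frame updates a small persistent state instead of reprocessing a whole window of past images, which is one common way to keep spatio-temporal reasoning lightweight on embedded hardware. Everything here is a hypothetical stand-in, not the Aloception implementation.

```python
# Hypothetical recurrent per-frame loop (illustrative stand-in only).
from typing import Any, Iterable, Tuple

State = Tuple[int, float]  # (frames seen, smoothed score) carried across frames


def process_frame(frame: Any, state: State) -> Tuple[dict, State]:
    """Fuse the new frame with the carried-over state (toy recurrent update)."""
    frames_seen, smoothed = state
    new_state = (frames_seen + 1, 0.9 * smoothed + 0.1)  # simple recurrence
    output = {"frame": new_state[0], "score": round(new_state[1], 3)}
    return output, new_state


def run(stream: Iterable[Any]) -> None:
    state: State = (0, 0.0)  # persistent memory across the whole stream
    for frame in stream:
        output, state = process_frame(frame, state)
        print(output)


run(range(3))  # stand-in for a real-time camera stream
```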

Humans navigate with only their two eyes.
The rest happens in the brain.
We focus on the brain, not on adding extra eyes.

Who we are

Ambitious team for ambitious goals.

Founded in 2020 by Rémi Agier and Thibault Neveu, Visual Behavior now counts 15 creative and passionate people focused on solving observation problems for robotics.

Based on our extensive experience in cognitive science, computer vision and robotics engineering, we aim to help autonomous mobility companies by providing a scalable and affordable scene understanding solution.

Rémi AGIER

CEO & Co-founder

Thibault NEVEU

CTO & Co-founder