Robotics N°1

Why is autonomous robotics so hard?

Introduction

Robots are deeply present in our imagination, thanks to a wide range of books, films and other stories. However, since Isaac Asimov first used the word “robotics” in his short story Runaround, which introduced the famous Three Laws of Robotics, it has become clear that robots have not lived up to these rich expectations. Nobody has needed to implement the Three Laws in a real robot yet, and today's robots look more like advanced automated machines than intelligent beings.

To understand why robotics is so hard, we need to understand what constitutes an autonomous robot from a functional perspective. Instead of splitting robots into software and hardware, as many companies do, we will speak about Observation and Action. Symbolically, any robot has two abilities: the first, Observation, lets the robot perceive its environment; the second, Action, lets it act on that environment. This can be compared to the read/write idiom.

With this point of view, it becomes clear that our definition of a robot is very broad. A companion robot like Nao is obviously a robot, but a self-driving car, a vacuum robot, and potentially even a smart camera could also be considered autonomous robots (if the latter triggers a real-world action, for example). On the other hand, an industrial robotic arm is questionable, as it makes almost no observation of the scene. In fact, our definition draws a distinction between automatic and autonomous.

 

First, we can define an “automatic robot” as a machine that performs a precomputed action based on a manual trigger or a basic sensor (part detection, timing, etc.). For example, consider a welding robot that welds two parts together based on a trigger, where the parts are always in the same place and the weld is always the same.

If we formalize the processing pipeline, we have:

Sense ➤ (Control) ➤ Act
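To make this concrete, here is a minimal sketch of such an automatic cell in Python. The sensor check, the fixed waypoints and the trajectory replay are hypothetical placeholders, not a real robot API; they only illustrate how little decision-making happens between the trigger and the action.

import time

# Minimal sketch of an automatic robot: Sense -> (Control) -> Act.
# The fixed waypoints and the two functions below are hypothetical stubs.

WELD_PATH = [(0.10, 0.20, 0.05), (0.12, 0.20, 0.05), (0.14, 0.20, 0.05)]  # precomputed waypoints (m)

def part_present() -> bool:
    """Basic sensor: True when a part sits at the welding station (stubbed)."""
    return False  # a real cell would read a proximity switch here

def execute_trajectory(waypoints) -> None:
    """Replay the exact same precomputed trajectory every time (stubbed)."""
    print(f"Welding along {len(waypoints)} waypoints")

while True:
    if part_present():                 # Sense: a single binary trigger
        execute_trajectory(WELD_PATH)  # Act: an identical motion every cycle
    time.sleep(0.1)                    # (Control): nothing to decide in between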

 

Then, we can define an autonomous robot as a machine that looks at the world, decides which action to take, and performs it. For example, take a mobile picking robot responsible for detecting the part to pick, localizing it, picking it, and bringing it to the next assembly station, without disturbing the people passing by along the way.

This time, we can formalize a longer processing pipeline:

 

Sense ➤ Perceive ➤ Understand ➤ Plan ➤ Control ➤ Act
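To contrast with the automatic case, here is a hedged sketch of this longer pipeline in Python. Every stage is a stub standing in for an entire subsystem (camera driver, object detector, world model, motion planner, controller); the names and return values are illustrative assumptions, not an existing library.

# Sketch of the autonomous pipeline: Sense -> Perceive -> Understand -> Plan -> Control -> Act.
# Every function below is a hypothetical stub hiding an entire subsystem.

def sense():
    """Sense: grab raw data from the cameras and other sensors (stub)."""
    return {"image": None, "depth": None}

def perceive(raw):
    """Perceive: turn raw pixels into detected objects (stub)."""
    return [{"part": "bracket", "pose": (0.4, 0.1, 0.0)}]

def understand(detections):
    """Understand: build a scene model -- target part, obstacles, people nearby (stub)."""
    return {"target": detections[0], "obstacles": []}

def plan(scene):
    """Plan: compute a collision-free path to the part and on to the station (stub)."""
    return [scene["target"]["pose"], (1.0, 0.0, 0.0)]

def control(waypoint, scene):
    """Control: convert the next waypoint into low-level commands (stub)."""
    return {"wheel_speeds": (0.2, 0.2), "gripper": "open"}

def act(command):
    """Act: send the commands to the actuators (stub)."""
    print("actuating:", command)

# One cycle of the pipeline; a real robot runs this loop continuously.
scene = understand(perceive(sense()))
for waypoint in plan(scene):
    act(control(waypoint, scene))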

 

This pipeline is necessary because of the complexity of the task asked of the robot. In an automatic robot, most of the steps described above are handled by the engineer who codes the expected behavior. In the autonomous case, the variety and diversity of the situations encountered cannot be hard-coded and must be handled autonomously by the robot. This constraint can be modeled by considering the observation space and the action space, i.e. the number of different observations and actions the robot can encounter.
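As a back-of-the-envelope illustration (the camera resolution and the arm's degrees of freedom below are assumed values, not taken from a specific robot), we can compare the two spaces:

import math

# Automatic welding cell: one binary trigger, one precomputed trajectory.
welder_observation_space = 2   # part present / not present
welder_action_space = 1        # the single fixed weld path

# Autonomous picking robot (assumed setup): a 640x480 greyscale camera and a 6-DOF arm.
pixels = 640 * 480
grey_levels = 256
# The number of distinct images is 256 ** pixels; we only count its digits to keep it readable.
picker_observation_digits = int(pixels * math.log10(grey_levels))

print(welder_observation_space, welder_action_space)  # 2 1
print(picker_observation_digits)  # ~739,811 digits: far too many cases to hard-code
# The 6-DOF arm moves in a continuous joint space, so its action space is effectively infinite.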

The graph illustrates the diversity of robots: automatic robots, with small observation and action spaces, sit at the bottom left, while autonomous robots spread out according to their observation and action spaces. For an autonomous robot, the observation space is directly related to the task and depends on the difficulty of the context. As an approximation, a large action space always implies a large observation space, needed to cover the target use case.

For autonomy, Observation and Action are both challenging, but today's applications struggle mostly with Observation, since Action was already largely addressed by the previous automation revolution.

Intelligence vs Cost

Today, robots have a hard time penetrating the market. A major reason is their cost, split between hardware costs and development costs. In reality, those costs are strongly related to the observation and action capabilities of the system. It all begins with a lack of intelligence and the methods used to mitigate this limitation. The first way around the problem is to multiply sensors and actuators and to use more complex sensors. This directly increases the hardware cost of the robot, but also its software cost, since more specific development in sensor fusion and integration must be done. The second widely used solution is to manage edge cases manually, i.e. to hard-code in software the specific behaviors that the intelligence cannot handle. The last way to mitigate the lack of intelligence is to use more precise mechanical parts, to compensate for the inability of the intelligence to correct the action along the way. All of these countermeasures increase the cost of the robot.

An example of autonomy: Self-driving cars

Self-driving cars are probably the most interesting example of the effort made to push autonomy forward. Multi-billion-dollar investments over the last decade demonstrate the complexity of attaining full autonomy. Even though a human can drive a car with nothing more than two eyes, two feet, and two arms, self-driving car companies still struggle to solve the problem. The actions performed by a self-driving car are physically simple; in fact, the modifications needed to drive a car by wire are quite straightforward. All the complexity lies in observation and comprehension. The most common way to deal with this difficulty is to use high-tech sensors, essentially 3D lidars that cost tens of thousands of dollars. Since we are not yet able to interpret the information coming from cameras at a human level, we compensate for this inability with costly sensors. That is how we turn an intelligence problem into a cost problem.

To be precise, not every company follows this path. Tesla is probably the most influential company trying to rely on cameras and intelligence alone. However, considering the hype around lidar companies, we can conclude that many people in the industry remain doubtful about Tesla's bet.


Conclusion

In this article, we started our journey into the robotics world by defining Observation and Action from an abstract, functional point of view. We also drew the distinction between automatic and autonomous robots by comparing their associated representation spaces. This is how we can begin to catch a glimpse of the complexity of robotics, and of how an intelligence problem can affect every aspect of a robot, from its hardware to its final cost.

Why robotics is so hard is obviously a complex question with many potential answers, but in the next articles we will dig into this subject. We will see that Observation and Intelligence both play a key role in explaining why robots aren't everywhere in our lives yet.