Working Towards Autonomy

Jason and ChatGPT, thoughts

Robots Are Still “Dumb”

As we approach the end of 2024, robotics startups are making headlines with projects in humanoid robots, robotaxis, and even autonomous flying vehicles. This surge in innovation is inspiring for me personally, but I can’t ignore the reality: robots today are still relatively “dumb.” Despite the hype, we’re far from reaching true AGI (Artificial General Intelligence).

In my view, robots remain limited in autonomy across most dimensions. At best, we’re at the early stage of just controlling movement—whether it’s wheels, legs, or drones—to tackle basic tasks like climbing stairs or navigating rough terrain. To be honest, there’s still no clear answer on what designs or form factors would allow robots to integrate into society seamlessly, or how they’ll best support us in daily life and work.

When I talk about “autonomy,” I’m referring to a robot’s ability to adapt quickly to new situations or respond to human commands when needed. The tech required for this level of autonomy, in my opinion, is still far off. Getting there will take significant advances in mechanics, energy systems, materials, and edge computing.

What’s Next?

I don’t think there’s a reason to be discouraged. We’re still climbing the robotics development curve, and while progress feels slow at times, I believe that more talent and investment will keep flowing into this field, pushing us forward.

Given the current state of robotics, I’ve started mapping out my own vision of the next decade:

Phase 0: Semi-Autonomy

This is where we are right now. In my experience, human involvement is still key to decision-making, especially for tricky edge cases (out-of-distribution, or OOD, scenarios). By combining human interaction with robot capabilities, we’re gathering valuable data to develop a more adaptable, “general” understanding for robots.
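The human-in-the-loop pattern described above can be sketched in a few lines. This is a minimal, hypothetical example (the OOD score, threshold, and function names are all my own assumptions, not anyone's production system): the robot acts on its own policy when an observation looks familiar, and defers to a human operator when it looks out of distribution.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    features: list  # hypothetical sensor feature vector

def is_out_of_distribution(obs, threshold=0.8):
    # Toy OOD score: flag any observation whose features leave a nominal range.
    # Real systems would use a learned density or uncertainty estimate.
    return max(abs(x) for x in obs.features) > threshold

def decide(obs, policy, ask_human):
    """Route OOD observations to a human; otherwise act autonomously."""
    if is_out_of_distribution(obs):
        return ask_human(obs)  # human resolves the edge case (and labels data)
    return policy(obs)         # robot handles the familiar case itself
```

Every deferred case doubles as a labeled training example, which is exactly the data-gathering loop this phase is about.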

Phase 1: Centralized Autonomy

In Phase 1, I imagine robots reaching a level of autonomy where they can complete tasks without constant human input. I call this “centralized” autonomy because a single electronic control unit (ECU) will process all the data from sensors and actuators, allowing robots to make decisions on their own.
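One tick of such a centralized loop might look like the following sketch. Everything here (the trivial averaging "fusion", the actuator names, the control law) is a placeholder assumption; the point is only the shape: one unit ingests every reading and emits every command.

```python
def centralized_step(sensor_readings, control_law):
    """One control tick of a centralized architecture: a single ECU fuses
    all sensor data into one state estimate and commands every actuator."""
    # Toy fusion: average all readings into a single scalar state estimate.
    state = sum(sensor_readings.values()) / len(sensor_readings)
    # One global control law drives every actuator from that shared state.
    return {actuator: control_law(state)
            for actuator in ("left_wheel", "right_wheel")}
```

The appeal of this design is simplicity; its weakness, as the next phase suggests, is that every decision, however small, must pass through the same bottleneck.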

Phase 2: Distributed Autonomy

Phase 2, in my mind, is where things get interesting. Distributed autonomy breaks away from centralized systems, relying instead on multiple, decentralized sources of intelligence, similar to biological systems. Humans, for instance, use visual, vestibular, and proprioceptive systems together to maintain stability, with the brain stepping in only for more complex tasks.

I believe distributed, hierarchical autonomy might be the ultimate goal, but I doubt we’ll follow a straight path to get there. We’ll probably see these levels of autonomy mixed across applications.
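To make the biological analogy concrete, here is a hypothetical two-tier sketch (class names, gains, and the escalation rule are all my own assumptions): fast local loops run continuously, like the reflexes that keep a body stable, and a slower central planner is consulted only when local errors grow too large.

```python
class LocalController:
    """Fast local reflex loop, analogous to the vestibular and
    proprioceptive reflexes that stabilize posture without the brain."""
    def __init__(self, setpoint, gain=0.5):
        self.setpoint, self.gain = setpoint, gain

    def step(self, reading):
        # Simple proportional reflex toward the setpoint.
        return self.gain * (self.setpoint - reading)

class Planner:
    """Slow central planner, invoked only when local loops can't cope."""
    def replan(self, readings):
        # Toy recovery strategy: move every setpoint to the mean reading.
        return sum(readings) / len(readings)

def distributed_step(controllers, readings, planner, error_limit=1.0):
    """One tick: reflexes run locally; large errors escalate to the planner."""
    errors = [abs(c.setpoint - r) for c, r in zip(controllers, readings)]
    if max(errors) > error_limit:  # the "brain" steps in only here
        new_setpoint = planner.replan(readings)
        for c in controllers:
            c.setpoint = new_setpoint
    return [c.step(r) for c, r in zip(controllers, readings)]
```

The design choice worth noticing is that most ticks never touch the planner at all, which is what makes the architecture cheap and robust compared to the centralized version.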

Take robotaxis as an example. Right now, even the most advanced companies face high BOM (bill of materials) costs to achieve fully autonomous vehicles. Reducing these costs and refining the tech will take significant effort and resources. Only then might they have the capacity to pursue true distributed autonomy—or maybe by then, new systems will emerge that replace traditional approaches altogether.

Final words

In my view, true autonomy in robotics isn’t a straight line—it’s a mix of ambition, tech, and determination. Whether we achieve it through incremental progress or disruptive breakthroughs, one thing is clear to me: the race to make robots truly smart has only just begun.

CC BY-NC 4.0 2024 © Nextra.