What businesses operating autonomous AI agents can learn from self-driving cars

As agentic AI accelerates, many companies are rushing to deploy autonomous agents, but could they be headed for a crash?

Ryan Falkenberg, CEO of CLEVVA, draws parallels between self-driving cars and AI agents to highlight the risks of unchecked autonomy.


A recent news story about a self-driving car holds some important lessons for the agentic age. The story involved a Tesla Model Y’s autonomous driving system being fooled by a fake roadway painted, cartoon-style, onto a polystyrene wall. To the casual observer it looked like a big setback for autonomous driving. But it actually demonstrated something far more important, and it applies equally to autonomous AI agents.

The Tesla Model Y was fooled by the fake landscape because it relied on a combination of cameras and AI, and because it was operating autonomously, it was tricked into making a mistake that led to a crash. Increasingly, autonomous vehicle operators such as Waymo are trying to mitigate this risk by putting more stringent guardrails in place: they build redundancy into critical systems, monitor vehicles with a real-time response team, and operate only on well-defined routes. In other words, their vehicles are autonomous within defined boundaries.

If we don’t want autonomous AI agents causing the business equivalent of a car crash, we need similar kinds of guardrails in place. After all, most companies aren’t in a space where it’s practical for agents to have full autonomy. But what do those guardrails look like? And how can companies ensure that their AI agent providers have them in place?

First, let’s look at why you need guardrails. AI agents are not chatbots that merely answer questions. They can reason and learn for themselves, and they can decide to act on their conclusions. That autonomy makes them very powerful, but it also carries real risk: what if they reach the wrong conclusion and take the wrong action?

Trust is critical. Companies, particularly those operating in regulated industries, need to trust that their AI agents won't go rogue and do or say the wrong thing. Customers also need to trust that AI agents will understand them correctly, adjust to their specific needs, offer them the right solutions, and then perform the right actions to get the job done. Otherwise, they will default to asking to speak to a human.

Any company selling agentic AI solutions must be able to provide a high level of control if it is to succeed. Buying companies need to feel they can prescribe the rules and the pathways that can be travelled. The AI agent can then choose which pathway to take based on context, but it cannot leave the defined roads.
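To make that concrete, here is a minimal sketch of the idea in Python. It is purely illustrative and not a description of CLEVVA’s platform or any specific product; the pathway names, context fields and thresholds are assumptions invented for the example. The point is that the business prescribes the roads up front, and the agent only chooses between them.

```python
# Illustrative sketch: the business defines the approved pathways; the agent
# selects between them based on context but cannot invent a new one.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Pathway:
    name: str
    applies_to: Callable[[dict], bool]  # business-defined rule for when this pathway may be used
    steps: list[str]                    # pre-approved actions, in order

# The "roads" are prescribed by the business, not learned by the agent.
APPROVED_PATHWAYS = [
    Pathway("refund_standard",
            applies_to=lambda ctx: ctx["intent"] == "refund" and ctx["amount"] <= 500,
            steps=["verify_identity", "check_order", "issue_refund", "confirm"]),
    Pathway("refund_escalation",
            applies_to=lambda ctx: ctx["intent"] == "refund" and ctx["amount"] > 500,
            steps=["verify_identity", "log_case", "hand_over_to_human"]),
]

def choose_pathway(context: dict) -> Pathway:
    """The agent picks a pathway based on context, but only from the approved list."""
    for pathway in APPROVED_PATHWAYS:
        if pathway.applies_to(context):
            return pathway
    # No approved road exists for this situation: stop rather than improvise.
    raise LookupError("No approved pathway; escalate to a human agent.")

print(choose_pathway({"intent": "refund", "amount": 1200}).steps)
# ['verify_identity', 'log_case', 'hand_over_to_human']
```

The agent’s intelligence goes into selecting the right road and navigating the conversation along it, not into deciding where the roads should be.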

This is particularly important in contact centres, where conversations are highly rule-driven. Human agents have very little leeway in what they say and often follow scripts, with QA teams checking that they did. If they go off script, they are given negative performance feedback, more training and coaching. Similarly, if an AI agent says or does the wrong thing, it can cause digital, reputational, or even legal harm. To prevent this, strong guardrails are needed.
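As a simple illustration of what such a guardrail might look like, the sketch below checks every draft reply against business-defined rules before it reaches the customer and hands anything off-script to a human. It is a hypothetical example, not any vendor’s actual implementation; the rule patterns and fallback wording are assumptions.

```python
# Illustrative sketch: screen an AI agent's draft reply against business rules
# before it is sent, and escalate to a human when a rule is breached.

import re

BANNED_PATTERNS = [
    r"\bguarantee(d)?\b",     # hypothetical rule: no promised outcomes
    r"\bfinancial advice\b",  # hypothetical rule: regulated wording the agent may not use
]

def passes_guardrails(draft_reply: str) -> bool:
    """Return True only if the draft breaches none of the banned patterns."""
    return not any(re.search(p, draft_reply, re.IGNORECASE) for p in BANNED_PATTERNS)

def deliver(draft_reply: str) -> str:
    """Send an on-script reply as-is; hand anything off-script over to a human."""
    if passes_guardrails(draft_reply):
        return draft_reply
    return "Let me connect you with a colleague who can help with that."

print(deliver("We guarantee your claim will be approved."))
# -> "Let me connect you with a colleague who can help with that."
```

In practice the checks would be far richer than a few patterns, but the principle is the same as the QA team in a contact centre: the output is verified against the script before it can do harm.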

When talking to AI agent suppliers, ask them questions like: ‘How much control do you provide while the agent is operating autonomously?’; ‘What type of guardrails are in place?’; ‘What type of reporting is available?’; and ‘How quickly will it adjust to changing rules?’

Putting guardrails in place will limit the speed at which an AI agent can learn from its environment, but most companies can’t afford to simply set agents loose and hope for the best. It’s more important to get the AI agent replicating your formula than reinventing it.

As more companies implement AI agents, the lesson from the self-driving car crash is clear: intelligence without oversight or guardrails can be a liability, not an asset. In much the same way that self-driving cars should not deviate from the road, AI agents should not deviate from process- and rule-defined guardrails. Responsible autonomy means taking the time to define these boundaries, testing first in low-risk environments, and having fail-safe mechanisms in place in case something goes wrong. Those who do will not only avoid costly mistakes, but will also find high-performing AI agents an incredibly valuable asset.