For every human employee in the average enterprise, there are now 82 machine identities - including AI agents, bots and automated systems - operating across the network. These “digital workers” never log off. They access data, trigger actions and make decisions at machine speed. Yet most organisations are still governing them as if they were just another IT tool.
According to Gartner, 40% of enterprise applications will feature AI agents by the end of this year, up from less than 5% just a year ago. But only 6% of organisations currently have advanced AI security strategies in place - meaning adoption is far outpacing governance.
For South African businesses already operating under regulatory and operational pressure, this creates an entirely new category of risk.
“South African enterprises are racing to deploy AI, but many don't recognise that these agents effectively become employees with access to critical systems and sensitive data,” says Justin Lee, Regional Director for Palo Alto Networks in English-speaking Sub-Saharan Africa. “Yet most governance frameworks were built for human employees, not autonomous systems that never sleep.”
The new insider threat
Improperly secured AI agents represent a dangerous new form of insider threat. An always-on agent with privileged access, if compromised, can trigger cascading damage at machine speed.
“The most successful organisations are treating AI agents not as IT projects, but as digital employees requiring proper onboarding, access controls, and oversight,” notes Lee. “You wouldn't give a new hire unrestricted access to all your systems on day one. Autonomy must be earned, not assumed.”
Executive liability is emerging
As AI systems take autonomous actions with real-world consequences, the governance gap carries implications beyond operational risk. Direct personal liability for the actions of rogue AI agents is becoming a legal reality.
Accountability no longer stops with IT. Boards and executives are increasingly exposed to regulatory and reputational fallout when autonomous systems fail.
How to govern AI agents
Lee recommends three immediate actions:
1. Establish cross-functional AI governance councils that extend beyond IT to include security, risk, legal, and business stakeholders.
2. Begin AI programmes with measurable business goals and board-approved risk thresholds before deployment.
3. Apply zero trust principles to AI agents from day one, including least privilege access, comprehensive logging, and humans in the loop for high-impact actions (see the sketch after this list).
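To make the third point concrete, the short Python sketch below shows what deny-by-default access, comprehensive logging and a human approval gate might look like in practice. It uses only the standard library; the action names and the policy helper are illustrative assumptions, not any vendor's actual API.

```python
# A minimal sketch of zero trust controls for an AI agent. The policy
# helper and action names are hypothetical, for illustration only.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("agent-governance")

# Least privilege: the agent starts with an explicit, minimal allow-list.
ALLOWED_ACTIONS = {"read_report", "draft_email"}

# High-impact actions always require a human in the loop.
HIGH_IMPACT_ACTIONS = {"transfer_funds", "delete_records", "change_access"}


def request_action(agent_id: str, action: str, approver: str | None = None) -> bool:
    """Gate every agent action: deny by default, log everything."""
    log.info("agent=%s requested action=%s", agent_id, action)

    if action in HIGH_IMPACT_ACTIONS:
        if approver is None:
            log.warning("agent=%s action=%s blocked: human approval required",
                        agent_id, action)
            return False
        log.info("agent=%s action=%s approved by %s", agent_id, action, approver)
        return True

    if action not in ALLOWED_ACTIONS:
        log.warning("agent=%s action=%s denied: not on allow-list", agent_id, action)
        return False

    return True


if __name__ == "__main__":
    request_action("report-bot-01", "read_report")      # permitted
    request_action("report-bot-01", "transfer_funds")   # blocked: no human approver
    request_action("report-bot-01", "transfer_funds", approver="cfo@example.com")
```

The design choice is deny-by-default: the agent can only do what is explicitly permitted, every request is logged for audit, and anything high-impact stalls until a named human signs off.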
“In three years, we won't be asking whether organisations adopted AI. We'll be asking which ones governed it well enough to survive,” he concludes.
This topic, along with the broader implications of AI for enterprise security, will be explored at Palo Alto Networks' Ignite on Tour event in Johannesburg on 26 February 2026.