As AI takes on human-like roles, companies must manage it as a user—with access controls and oversight—to prevent automation from outpacing security.
AI systems now act like employees — making decisions and accessing data — but most companies still treat them like tools. Experts warn that without proper identity controls, automation could become the fastest insider threat yet.
Artificial intelligence is no longer just a background process. It's acting, deciding and interacting in ways that blur the line between software and staff. From customer support bots to development assistants, AI agents now participate directly in workflows, often with the same access and privileges as human employees. That shift is quietly redefining identity security.

Ric Smith of Okta put it succinctly when we spoke: "We've got a new element to help protect our customers from, which is the proliferation of these agents and treating them as they should be treated, which is as a user."

This perspective reframes AI as a new class of user, one that behaves like an employee but operates at machine speed. Yet most organizations still treat AI as just another application or background service. Developers provision API keys or service accounts that let AI systems interact with corporate resources, but those credentials often persist indefinitely, without the controls applied to human accounts.

Smith warned that this hands-off approach is risky. Companies typically have protocols for managing and monitoring new employees, but AI agents receive no comparable scrutiny or oversight. "It's like you take somebody off the street and you just bring them in and give them access to a bunch of stuff." The metaphor isn't exaggerated. Modern AI systems can retrieve data, write code and initiate actions independently. Without visibility or restrictions, they can make unauthorized changes, expose sensitive data or amplify existing vulnerabilities.

Den Jones of 909Cyber reinforced that point. "The moment an AI system can log in, pull data, or take action, it's part of your identity fabric, whether you've acknowledged it or not," he said. "The problem is that most organizations still treat AI like infrastructure, not like an insider. Until we apply the same governance, visibility and behavioral controls to machine identities as we do to people, we're just creating a faster way to make the same mistakes."

Smith noted that this dynamic mirrors long-standing patterns in cybersecurity: new technology arrives, adoption accelerates and security follows behind. The difference is speed. AI operates far faster than any human, so a lapse in oversight that once caused a minor issue can now scale into a major breach within seconds.

Identity, Smith emphasized, has always been the weak link. That insight resonates with current trends across the cybersecurity landscape. Attackers continue to rely on compromised credentials, phishing and social engineering, because once they have an identity, they effectively become the organization. Now, as non-human identities proliferate, that same risk applies to AI agents and automated systems.

The risk isn't limited to malicious use. Even well-intentioned AI can make errors that cause harm. Large language models and other AI systems operate probabilistically: they predict outcomes rather than make deterministic choices. The output from AI is, essentially, a "best guess." "LLMs are imperfect, like humans," Smith said. "We hire people and they make mistakes. Well, now we have a more efficient person that we can hire that makes fewer mistakes, but they still make mistakes." That analogy reframes AI as an employee with flaws, not an infallible machine. An AI agent might misclassify data, delete critical files, or share sensitive information while believing it's following instructions. And because it operates faster than humans, even a small misstep can cascade across systems before anyone notices.

To meet this challenge, organizations must extend identity frameworks to include non-human users. That means applying the same principles used for employees — onboarding, access provisioning, behavioral monitoring and offboarding — to AI systems as well.
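The employee-style lifecycle described above (onboarding, access provisioning, behavioral monitoring and offboarding) can be sketched in code for non-human identities. This is a minimal illustration of the idea, not any vendor's API; the class and method names are invented for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                                    # accountable human sponsor
    entitlements: set = field(default_factory=set)
    active: bool = True
    audit_log: list = field(default_factory=list)

class AgentDirectory:
    """Treats AI agents like employees: onboard, provision, monitor, offboard."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentIdentity] = {}

    def onboard(self, agent_id: str, owner: str) -> AgentIdentity:
        ident = AgentIdentity(agent_id, owner)
        self._agents[agent_id] = ident
        return ident

    def provision(self, agent_id: str, entitlement: str) -> None:
        self._agents[agent_id].entitlements.add(entitlement)

    def record_action(self, agent_id: str, action: str) -> None:
        # Every action is logged, and offboarded agents are refused outright.
        ident = self._agents[agent_id]
        if not ident.active:
            raise PermissionError(f"{agent_id} has been offboarded")
        ident.audit_log.append((datetime.now(timezone.utc), action))

    def offboard(self, agent_id: str) -> None:
        # Deactivate and strip entitlements, as with a departing employee.
        ident = self._agents[agent_id]
        ident.active = False
        ident.entitlements.clear()

directory = AgentDirectory()
directory.onboard("code-assistant-1", owner="alice@example.com")
directory.provision("code-assistant-1", "repo:read")
directory.record_action("code-assistant-1", "clone repository")
directory.offboard("code-assistant-1")
```

The key design choice mirrors the article's argument: every agent has a human owner, an audit trail, and access that can be revoked as a unit rather than a credential that persists indefinitely.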
Smith described this as a foundational shift: "That's a lot of new users that we need to gain new controls over to help our customers stay protected." Managing AI identities will require continuous visibility and the ability to analyze behavioral patterns in real time. It's not just about verifying who, or what, is acting, but ensuring that their actions align with intent.

AI brings enormous potential, but also new exposure. The digital perimeter hasn't vanished; it's multiplying with every new machine identity introduced into the enterprise. The organizations that evolve their identity systems to account for AI will be better positioned to innovate safely. In this new era, cybersecurity isn't just about defending against external attackers. It's about ensuring that autonomous systems, however intelligent, remain accountable to the same principles of trust, transparency and control that govern every other user.
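The intent-alignment check described above, verifying that an agent's observed actions match what its role is expected to do, can be sketched in a few lines. The role profiles and action names here are invented for illustration; in practice they would come from policy or learned baselines, not a hard-coded table.

```python
# Hypothetical per-role behavior profiles (illustrative only).
EXPECTED_ACTIONS = {
    "support-bot": {"ticket.read", "ticket.reply"},
    "build-agent": {"repo.read", "artifact.write"},
}

def review_actions(role: str, observed: list) -> list:
    """Return the observed actions that fall outside the role's expected set."""
    allowed = EXPECTED_ACTIONS.get(role, set())
    return [action for action in observed if action not in allowed]

# A support bot reading and replying to tickets is normal; exporting a
# database is not, and gets flagged for review.
anomalies = review_actions("support-bot", ["ticket.read", "db.export", "ticket.reply"])
print(anomalies)  # → ['db.export']
```

A real deployment would compare behavior against a statistical baseline rather than a static allowlist, but the principle is the same: the identity system knows what each agent is supposed to do and flags the rest.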
Tags: Agentic AI, Identity Security, Ric Smith, Okta, Den Jones, 909Cyber
