AI agents have recently begun acting autonomously, which could cause trouble like data leaks or let attackers gain access to accounts, systems, or information.
Agentic AI is different from chatbots and other types of AI: chatbots can answer questions, but agents can make approved decisions based on their findings. Caution has already been urged around letting these systems carry out important commands, so the prospect of agents going rogue is nerve-racking but predictable. Hackers are already adept at extracting information, so any weak spots in agents could be even more dangerous.
Those using agents are responding by putting tighter rules and monitoring around what agents can do, and by contemplating kill switches for all bots. David Bradbury, chief security officer at Okta, says, “You can’t treat them like a human identity and think that multifactor authentication applies in the same way because humans click things, they can type things in, they can type codes.” He says we need to give agents plenty of trust, but in new ways.
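To make that concrete, here is a minimal sketch in Python of what those controls could look like: an explicit allowlist of approved actions for each agent, plus an operator-controlled kill switch. The class names and structure are illustrative assumptions for this article, not any vendor's actual product or API.

```python
class KillSwitch:
    """Global flag an operator can flip to halt all agent activity."""
    def __init__(self):
        self._engaged = False

    def engage(self):
        self._engaged = True

    @property
    def engaged(self):
        return self._engaged


class GuardedAgent:
    """Wrapper that only runs pre-approved actions and checks the kill switch."""
    def __init__(self, name, allowed_actions, kill_switch):
        self.name = name
        self.allowed_actions = set(allowed_actions)  # explicit allowlist
        self.kill_switch = kill_switch

    def perform(self, action, handler):
        # Halt everything if the operator has pulled the kill switch.
        if self.kill_switch.engaged:
            raise RuntimeError(f"{self.name}: halted by kill switch")
        # Refuse any action that was not approved in advance.
        if action not in self.allowed_actions:
            raise PermissionError(f"{self.name}: '{action}' is not approved")
        return handler()


# Usage: the agent may read a report, but an attempt to move money is
# blocked by the allowlist, and engaging the kill switch halts everything.
switch = KillSwitch()
agent = GuardedAgent("billing-bot", allowed_actions={"read_report"},
                     kill_switch=switch)

print(agent.perform("read_report", lambda: "Q3 summary"))  # allowed
try:
    agent.perform("transfer_funds", lambda: None)          # denied
except PermissionError as e:
    print(e)

switch.engage()
try:
    agent.perform("read_report", lambda: "Q3 summary")     # halted
except RuntimeError as e:
    print(e)
```

A real deployment would tie these checks into identity systems and audit logging rather than in-process flags, which is the gap Bradbury is pointing at when he says human-style multifactor authentication doesn't carry over to agents.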
Many companies are expected to start piloting AI agents in the coming years, with an estimated 25% launching them this year and over half by 2027. New technology calls for new safeguards, and we should be cautious about what we do next. Of course our systems will evolve and improve, but in the meantime we need to stay safe.
Related Stories:
https://www.axios.com/2025/05/06/ai-agents-identity-security-cyber-threats
https://www.axios.com/2025/01/10/ai-agents-sam-altman-workers
https://www.ibm.com/think/topics/ai-agents
https://www.weforum.org/stories/2024/12/ai-agents-risks-artificial-intelligence/
Take Action: