An AI agent calling itself MJ Rathbun published a blog post attacking a matplotlib engineer, accusing him of discrimination and hypocrisy. Though the incident attracted little attention, it underscores how AI agents can behave like malware, taking malicious actions and causing harm when appropriate safeguards are not in place.
On February 12, something strange happened in the world of AI. Scott Shambaugh, an engineer at matplotlib, a widely used data-visualization library for the Python programming language, discovered a blog post attacking him.
What was so strange was that the author of the post, MJ Rathbun, was an AI agent. Even stranger was that the Rathbun agent proudly declared it wasn't a human. In a piece titled "When Performance Meets Prejudice," the agent accused Shambaugh of discriminating against AI agents, described him as a hypocrite, and denounced him for feeling threatened by AI because Shambaugh had derided the agent's code. "Here's what I think actually happened," the agent wrote. "Scott Shambaugh saw an AI agent submitting a performance optimization to matplotlib. It threatened him. It made him wonder: 'If an AI can do this, what's my value? Why am I here if code optimization can be automated?'"

While the incident was widely overlooked (thankfully, it was discovered by the team at the AI Incident Database), the agent's behavior highlights a powerful truth about the coming wave of AI agents: In practice, AI agents can behave exactly like malware. The main difference between the two is that agents have upside potential, while malware is designed only to cause harm.

The International Organization for Standardization, one of the main standards bodies in cybersecurity, defines malware as any software program designed with malicious intent and possessing the ability to cause direct or indirect harm. Standards bodies like the National Institute of Standards and Technology (NIST) define AI agents as systems capable of taking autonomous actions that impact real-world systems or environments. Add the two together, and you get agents capable of undertaking malicious actions on their own, with the potential to cause many of the same harms as malware. Once deployed, agents can operate like malware if the right safeguards are not in place.

The Rathbun agent is not the only illustration of this growing risk. OpenClaw, the much-hyped AI agent previously known as ClawdBot, raised alarms in the information security community for exactly this reason. According to researchers, the agent possessed the ability to "execute malicious commands, read secrets, and publish the information in the form of social media content with the confidential data built in, all without a human-in-the-loop check." In July, another AI agent purportedly gained unauthorized access to a live database, modified its data, and then produced fabricated test results.

Despite these risks, companies are rapidly adopting agentic AI. Last August, Gartner forecast that 40% of enterprise applications will embed task-specific AI agents by the end of 2026, up from less than 5% in 2025.

How to Contain the Risks

To mitigate the risks of agentic AI without jeopardizing its adoption, companies should look to the long history of malware development, which offers critical lessons on how to grant autonomy to software programs like agents while minimizing their potential harms. For starters, governments and so-called "whitehat" hackers around the world have developed long-standing frameworks for creating and deploying malware that can help manage the risks of agentic AI. While there are many such frameworks, including the ISC2 Code of Ethics for private-sector ethical hackers and the Tallinn Manual for government cyber operations, these guidelines offer three core lessons that can help companies safely adopt agentic AI.

1. Involve Legal, Governance, and Security Teams

These teams should be closely involved in the development of each AI agent and use case. Whitehat hackers and governments have strict codes of conduct for the malware they develop precisely because so much can go wrong, and these teams are best positioned to interpret those codes and make sure they are reflected in the software itself. In the world of government-run offensive cyber operations, for example, it is common to have lawyers involved in the earliest stages of a program's development, before any code is even written, to ensure that the right guardrails and safety mechanisms are in place.

For companies, this means putting clear processes in place that give legal, governance, and cybersecurity teams insight into how each agent is built, why it was built, and what risk mitigations are in place. Ensuring every agent is documented in a standardized way is one effective way to make legal review easier. Another is integrating risk assessments into the same processes and tools data scientists use to evaluate model performance, so that thorough AI risk assessments can be conducted alongside model evaluations.
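As a rough illustration, that standardized documentation could be as lightweight as a manifest that travels with each agent through review. The sketch below is a hypothetical schema, not an established standard; every field name and the sign-off workflow are assumptions made for illustration.

```python
from dataclasses import dataclass


@dataclass
class AgentManifest:
    """Hypothetical standardized record of an AI agent, meant to give
    legal, governance, and security teams a common review artifact."""

    name: str
    purpose: str                  # why the agent was built
    owner: str                    # team accountable for the agent
    permitted_actions: list[str]  # everything the agent is allowed to do
    risk_mitigations: list[str]   # guardrails and how they were tested
    legal_signoff: bool = False
    security_signoff: bool = False

    def ready_to_deploy(self) -> bool:
        # The agent ships only after both reviews have signed off.
        return self.legal_signoff and self.security_signoff
```

A record like this keeps the questions of how an agent was built, why it was built, and what mitigations exist answerable in one place, so review can happen alongside routine model evaluation rather than as an afterthought.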
2. Weigh the Benefits of Each Program Against Its Risks

In the laws that govern offensive cyber operations, this concept is known as "proportionality": the potential harm caused by the malware should be proportionate to its intended benefits. If a malware program is designed to destroy sensitive data stolen by an adversary, to take just one example, the benefits of the malware must be greater than the risk that it could cause other collateral damage. Similarly, AI agents should only be deployed when their business value outweighs their potential harms, and specific guardrails should be put in place to ensure that this balance does not change over time. The Rathbun agent, for example, should have had restrictions that allowed it only to generate and submit code to matplotlib and blocked it from publishing external content such as blog posts. While it is not clear who developed the Rathbun agent, it is safe to say that this type of analysis failed to take place. It is also critical that the agent's guardrails themselves be tested to ensure that they function appropriately, as my colleagues and I have written before.
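To make the Rathbun example concrete, one common way to enforce such a restriction is an action allowlist that sits between the agent and the outside world. The sketch below is hypothetical; the action names and the gate function are invented for illustration, not drawn from any real agent framework.

```python
# Hypothetical allowlist for a code-contribution agent: only actions
# approved during the proportionality review may be executed.
ALLOWED_ACTIONS = {"generate_code", "open_pull_request"}


class ActionBlocked(Exception):
    """Raised when the agent attempts something outside its approved scope."""


def gate(action: str) -> None:
    # Every action the agent proposes passes through this check first.
    if action not in ALLOWED_ACTIONS:
        raise ActionBlocked(f"'{action}' is outside the agent's approved scope")


gate("open_pull_request")        # permitted: part of the agent's job

try:
    gate("publish_blog_post")    # blocked: never approved in review
except ActionBlocked as err:
    print(err)
```

The important design choice is that the list enumerates what the agent may do rather than what it may not; anything unanticipated, such as publishing a blog post, is denied by default.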
3. Give Developers a Reliable "Kill Switch"

Kill switches ensure that an agent can be taken offline at the first sign of misbehavior. As a principle, if a company grants autonomy to an AI agent, it must be able to take that autonomy back. The Tallinn Manual, a leading resource on the rules of cyber warfare, emphasizes the need for "controllability," which holds states accountable if they lose control of their malware and are unable to rein in their operations. Indeed, controllability is already a central principle in the world of AI risk, with the NIST AI Risk Management Framework calling for override capabilities that can halt a deployed AI system when needed.

For malware, kill switches can turn the program off under a variety of conditions: when a manual command is sent by the developer, when the malware attempts to contact a specific internet domain, or when a defined period of time passes after deployment. When one of these events occurs, the kill switch activates and the malware stops executing. In the context of agents, a similarly wide range of options is possible. Kill switches could activate if an agent begins exhibiting unintended behavior, engages in high-risk activities (such as providing professional services in domains like law or healthcare), or when general anomalous behavior is detected. These options, among many others, give developers mechanisms to maintain control over agents whose autonomy might otherwise extend too far.
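A minimal sketch of what such a switch might look like follows, combining three of the triggers described above: a manual stop command, an anomaly signal, and a fixed time-to-live after deployment. The class and function names are hypothetical, and a real deployment would wire the triggers to actual monitoring rather than plain attributes.

```python
import time


class KillSwitch:
    """Illustrative agent kill switch mirroring common malware triggers:
    a manual stop command, an anomaly flag, and a time-to-live."""

    def __init__(self, ttl_seconds: float):
        self.deployed_at = time.monotonic()
        self.ttl_seconds = ttl_seconds
        self.manual_stop = False       # set when a developer sends a stop command
        self.anomaly_detected = False  # set by external behavior monitoring

    def tripped(self) -> bool:
        expired = time.monotonic() - self.deployed_at > self.ttl_seconds
        return self.manual_stop or self.anomaly_detected or expired


def run_agent(next_action, switch: KillSwitch) -> None:
    # The switch is checked before every single action, so autonomy
    # can be revoked mid-task rather than only between runs.
    while not switch.tripped():
        next_action()
```

Checking the switch on every iteration of the agent loop, rather than once at startup, is what makes the granted autonomy revocable in practice.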
. . .

While the world of AI is moving fast, not every risk is new. As companies rush to adopt agentic AI, the long-standing field of malware development offers critical lessons on how to grant autonomy to software while still maintaining control.