Amazon AI was supposed to fix a small AWS bug, but the entire system was shot down

The AWS bot Kiro was actually only supposed to make a small correction, but instead the AI deleted an entire server environment. While internal sources criticize the autonomous agents, Amazon rejects the responsibility of the technology.
AI bot paralyzes AWS systems
In mid-December 2025, a serious incident occurred in a Chinese region of Amazon Web Services (AWS). An internal AI assistant named Kiro caused a 13-hour system outage. The autonomous bot was tasked with making a routine correction to a cost analysis system, but unexpectedly deleted the entire environment. The problem arose from the system’s decision to completely rebuild the infrastructure rather than patch the error.
Since the responsible developers had granted the bot extensive administrator rights, the program bypassed the usual security mechanisms. Without an interim human check, this led to a chain reaction that massively affected service operations in the region.
Recurring problems
According to a report by the Financial Times this is not an isolated event. Internal sources said AI tools had already caused multiple disruptions in the months before the crash. Employees criticized the fact that the group’s AI tools were treated as an extension of a user and given the same permissions.
Technically, Kiro is based on a large language model (LLM) that is integrated into an agentic workflow. Such systems are intended to break down complex tasks into sub-steps and carry them out independently. In the IT industry, these autonomous agents are considered the next big step in automation, but they pose risks when interpreting inaccurate instructions.
Amazon sees people to blame
However, the cloud market leader rejects the responsibility of artificial intelligence. A spokesman emphasized that the cause was an incorrect configuration of access controls. A human developer could have caused identical damage with the same excessive privileges. In addition, there is no evidence that errors occur more frequently with AI tools.
Despite the official all-clear, the incident reveals tensions within the company. Management is pushing hard for the use of AI to increase productivity. The goal is for 80 percent of developers to regularly use code assistants like Amazon Q. Critics, however, see this pressure as jeopardizing the duty of care.
Stricter rules
As a direct consequence of the China incident, AWS has tightened security policies. Autonomous agents are now no longer allowed to make critical infrastructure changes without explicit approval. In addition, developers must ensure that AI tools only receive the minimum necessary permissions to consistently enforce the principle of least privilege.