
AWS Power Outage

On March 1, 2026, a significant power outage in the AWS me-central-1 (Middle East) region was triggered by an unusual physical event: external objects struck a data center, throwing sparks and igniting a fire.

The incident caused significant disruption to Amazon Elastic Compute Cloud (EC2) services, networking APIs, and resource availability within a single Availability Zone (mec1-az2).

According to AWS incident documentation, the fire department ordered a complete power shutdown of the facility, including its backup generators, while it responded to the fire. The resulting outage left EC2 instances, Amazon Elastic Block Store (EBS) volumes, and Amazon Relational Database Service (RDS) databases in the affected zone inoperative.

Timeline of the Incident

The disruption began around 4:30 AM PST, and AWS was officially investigating the connectivity and power issues by 4:51 AM PST. By 6:09 AM PST, AWS confirmed the localized power outage in mec1-az2.

AWS began shifting traffic away from the compromised facility, redistributing load to the unaffected Availability Zones in the region.

AWS engineers determined that the outage severely affected the EC2 networking APIs. Customers reported widespread throttling errors and failures when calling essential networking operations, including AllocateAddress, AssociateAddress, DescribeRouteTables, and DescribeNetworkInterfaces.
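For callers hit by this kind of throttling, the standard defensive pattern is to retry with exponential backoff. The sketch below shows one minimal way to wrap boto3 EC2 calls; the helper function, retry counts, and delays are illustrative assumptions, not anything AWS published for this incident.

```python
import time

import boto3
import botocore.exceptions

ec2 = boto3.client("ec2", region_name="me-central-1")

def call_with_backoff(fn, *, max_attempts=6, base_delay=1.0, **kwargs):
    """Retry an EC2 API call that fails with a throttling error."""
    for attempt in range(max_attempts):
        try:
            return fn(**kwargs)
        except botocore.exceptions.ClientError as err:
            code = err.response["Error"]["Code"]
            if code not in ("RequestLimitExceeded", "Throttling"):
                raise  # only retry throttling-style failures
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise RuntimeError("still throttled after %d attempts" % max_attempts)

# Example: list network interfaces despite heavy throttling.
response = call_with_backoff(ec2.describe_network_interfaces)
```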


Throughout the afternoon, AWS rolled out a series of configuration changes to mitigate the API failures. By 2:28 PM PST, the AllocateAddress API began showing early signs of recovery.

The AssociateAddress API, however, remained degraded, preventing customers from remapping Elastic IP addresses from unreachable resources to healthy ones in unaffected zones.

Mitigation and Partial Recovery

At 6:01 PM PST, AWS confirmed that AssociateAddress API requests had been restored. The engineering team deployed a change that allowed customers to forcibly disassociate Elastic IP addresses from resources stranded in the powered-down data center.

This mitigation enabled organizations to restore connectivity by associating their existing IP addresses with newly launched resources in unaffected Availability Zones.
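In boto3 terms, that recovery path looks roughly like the sketch below. The association, allocation, and instance IDs are placeholders, and this is a minimal illustration of the pattern rather than AWS's published remediation steps.

```python
import boto3

ec2 = boto3.client("ec2", region_name="me-central-1")

# Force-detach the Elastic IP from the unreachable resource in mec1-az2
# (placeholder association ID).
ec2.disassociate_address(AssociationId="eipassoc-0123456789abcdef0")

# Re-associate it with a replacement instance in a healthy zone
# (placeholder allocation and instance IDs).
ec2.associate_address(
    AllocationId="eipalloc-0123456789abcdef0",
    InstanceId="i-0123456789abcdef0",
    AllowReassociation=True,  # tolerate a lingering stale mapping
)
```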

Despite the progress in restoring API functionality, the underlying physical infrastructure remained offline. AWS indicated it was still waiting for clearance from local authorities to safely restore power to the damaged facility.

“We are still awaiting permission to turn the power back on, and once authorization is granted, we will ensure we restore power and connectivity safely,” AWS had stated in its 9:41 AM PST update, hours before the API-level mitigations landed.

The incident underscores the importance of multi-Availability Zone architectures. AWS emphasized that customers running redundant applications across multiple zones were largely shielded from the outage.
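One quick way to gauge that exposure is to check how a region's running instances are distributed across Availability Zones. The following is a minimal sketch using boto3; it assumes nothing beyond standard EC2 describe permissions, and the example output is illustrative.

```python
from collections import Counter

import boto3

ec2 = boto3.client("ec2", region_name="me-central-1")

# Count running instances per Availability Zone to spot single-AZ risk.
az_counts = Counter()
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            az_counts[instance["Placement"]["AvailabilityZone"]] += 1

print(az_counts)  # e.g. Counter({'me-central-1a': 4, 'me-central-1b': 4})
```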

For organizations needing immediate recovery of affected workloads, AWS recommends launching replacement resources in unaffected zones, or in alternative AWS Regions, and restoring data from the most recent EBS snapshots or backups.
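A snapshot-based restore of that kind might look like the following boto3 sketch. The snapshot ID, instance ID, device name, and target zone are placeholders; the point is simply that the volume is recreated outside the affected zone before being attached.

```python
import boto3

ec2 = boto3.client("ec2", region_name="me-central-1")

# Recreate the volume from the latest snapshot, outside the affected zone
# (placeholder snapshot ID and zone name).
volume = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",
    AvailabilityZone="me-central-1a",
)

# Wait until the new volume is ready before attaching it.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach it to a replacement instance launched in the same healthy zone
# (placeholder instance ID and device name).
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)
```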

Because of the surge of traffic redirected from the compromised zone, AWS noted that customers might see longer provisioning times, or need to retry launches of specific instance types, in the healthy me-central-1 zones.
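Handling those transient capacity failures usually comes down to retrying the launch after a delay. The sketch below illustrates one way to do that with boto3; the AMI, subnet, instance type, and retry schedule are assumptions for illustration.

```python
import time

import boto3
import botocore.exceptions

ec2 = boto3.client("ec2", region_name="me-central-1")

# Retry the launch a few times if the healthy zones are short on capacity
# (placeholder AMI and subnet IDs).
for attempt in range(5):
    try:
        response = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",
            InstanceType="m5.large",
            MinCount=1,
            MaxCount=1,
            SubnetId="subnet-0123456789abcdef0",  # subnet in a healthy AZ
        )
        break
    except botocore.exceptions.ClientError as err:
        if err.response["Error"]["Code"] != "InsufficientInstanceCapacity":
            raise
        time.sleep(30 * (attempt + 1))  # wait longer before each retry
else:
    raise RuntimeError("no capacity after repeated launch attempts")
```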

As of the latest update at 6:01 PM PST, AWS had not provided an estimated timeframe for physical power restoration at the mec1-az2 facility. The company continues to encourage customers to operate from alternate Availability Zones or Regions where applicable while recovery actions are in progress.
