Get notified about any outages, downtime, or incidents for Wasabi Technologies and 1,800+ other cloud vendors. Monitor 10 companies for free.
Outage and incident data over the last 30 days for Wasabi Technologies.
Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!
Outlogger tracks the status of these components for Wasabi Technologies:
Component | Status |
---|---|
AP-Northeast-1 (Tokyo) | Active |
AP-Northeast-2 (Osaka) | Active |
AP-Southeast-1 (Singapore) | Active |
AP-Southeast-2 (Sydney) | Active |
CA-Central-1 (Toronto) | Active |
EU-Central-1 (Amsterdam) | Active |
EU-Central-2 (Frankfurt) | Active |
EU-South-1 (Milan) | Active |
EU-West-1 (London) | Active |
EU-West-2 (Paris) | Active |
US-Central-1 (Texas) | Active |
US-East-1 (N. Virginia) | Active |
US-East-2 (N. Virginia) | Active |
US-West-1 (Oregon) | Active |
Wasabi Account Control API | Active |
Wasabi Account Control Manager Console | Active |
Wasabi Management Console | Active |
View the latest incidents for Wasabi Technologies and check for official updates:
Description: From 2024-01-29 15:00 UTC to 2024-02-01 06:00 UTC, we experienced an issue with our IAM and WAC API operations that could result in slow response times to client requests. The root cause was a high number of duplicate requests to our system, which required multiple services to communicate and process the requests in the order in which they were received. This high rate of duplicate requests created a backlog in our billing subsystem, which could not respond at the speed at which requests were arriving. Because of this bottleneck in the billing subsystem, all IAM and WAC API requests were delayed until they could be processed in order. Although the root cause began at 15:00 UTC on 2024-01-29, our system kept up with the request rate until approximately 12:30 UTC on 2024-01-31, when we were notified of an increasing delay in IAM and WAC API requests. At 16:00 UTC on 2024-01-31, our team identified and blocked the source of the duplicate requests. At 17:00 UTC on 2024-01-31, our Operations and Engineering Teams began the recovery process to complete all queued requests and streamline the acceptance of new requests to our systems. By 06:00 UTC on 2024-02-01, the recovery process was complete and all systems were fully operational, with normal response times restored for the IAM and WAC APIs. (A client-side request sketch follows this incident's entry.)
Status: Postmortem
Impact: Minor | Started At: Jan. 31, 2024, 4:50 p.m.
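The pattern described in this postmortem, where a flood of duplicate requests deepened the backlog and made slow responses worse, suggests a simple client-side mitigation: send each account-management request once and retry with exponential backoff rather than re-firing it while waiting. The sketch below is illustrative only; the base URL, path, and authorization header are placeholders, not the real Wasabi Account Control (WAC) API surface.

```python
import time
import requests

# Placeholder values for illustration only; consult Wasabi's Account Control (WAC)
# API documentation for the real base URL, paths, and authentication scheme.
WAC_BASE_URL = "https://example.invalid/wac/v1"
API_KEY = "YOUR_API_KEY"

def wac_get(path, max_attempts=5, timeout=60):
    """Issue a single GET and retry with exponential backoff on timeouts or
    5xx responses, instead of firing duplicate requests while waiting."""
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            resp = requests.get(
                WAC_BASE_URL + path,
                headers={"Authorization": API_KEY},
                timeout=timeout,  # tolerate a slow response rather than resending
            )
        except requests.exceptions.RequestException:
            if attempt == max_attempts:
                raise
        else:
            if resp.status_code < 500:
                resp.raise_for_status()  # 4xx: fail fast, a retry will not help
                return resp.json()
            # 5xx: fall through and retry after backing off
        time.sleep(delay)
        delay *= 2
    raise RuntimeError(f"WAC API request to {path} failed after {max_attempts} attempts")
```

Backing off instead of resending avoids adding duplicate requests to an already backlogged queue, which is the behavior the postmortem identifies as the root cause.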
Description: From 2024-01-30 00:30 UTC to 2024-01-30 16:10 UTC, Reserved Capacity Storage (RCS) customers who had exceeded their purchased storage quota received the error 'StorageQuotaExceeded' when attempting to upload data to their Wasabi bucket(s). Any quota imposed on an RCS account is a soft limit and should not affect the account's ability to upload data to its bucket(s); however, due to a bug in a recently deployed update, at 00:30 UTC on 2024-01-30 our billing subsystem flagged all RCS accounts that had exceeded their purchased capacity and imposed a hard limit on those accounts, preventing any PUT API requests. At 15:20 UTC on 2024-01-30, our Billing team isolated the issue and began developing a fix. At 15:50 UTC, the fix was deployed to our billing subsystem, and by 16:10 UTC all affected RCS accounts were back to fully operational status. (See the upload error-handling sketch after this incident's entry.)
Status: Postmortem
Impact: None | Started At: Jan. 30, 2024, 3:58 p.m.
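Because Wasabi buckets are accessed through an S3-compatible API, the 'StorageQuotaExceeded' error named in this postmortem surfaces to clients like any other S3 error code. Below is a minimal sketch of handling it explicitly, assuming boto3 and placeholder credentials and regional endpoint.

```python
import boto3
from botocore.exceptions import ClientError

# Endpoint and credentials are placeholders; substitute your bucket's regional
# Wasabi endpoint and your own access keys.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-east-1.wasabisys.com",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

def upload(bucket, key, path):
    """PUT an object and report a quota rejection explicitly instead of retrying,
    since StorageQuotaExceeded reflects account capacity, not a transient fault."""
    try:
        with open(path, "rb") as fh:
            s3.put_object(Bucket=bucket, Key=key, Body=fh)
    except ClientError as err:
        code = err.response.get("Error", {}).get("Code", "")
        if code == "StorageQuotaExceeded":
            # The 2024-01-30 incident returned this error even for soft-limit RCS
            # accounts; outside that bug it indicates purchased capacity is exhausted.
            print(f"Upload to bucket {bucket!r} rejected by storage quota: {err}")
        else:
            raise
```

During the incident window, retrying a rejected PUT would not have helped until the billing-subsystem fix was deployed, so treating this error as non-retryable and alerting on it is the safer client-side default.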
Description: On Sunday, January 21st, 2024, at approximately 12:00 UTC, a hardware failure in our US-EAST-1 data center caused two database nodes within the region to reconnect to our cluster in a non-functional state. The introduction of these nodes to the regional database cluster caused the database to become unhealthy, and an exponential increase in unhealthy connections left the cluster unable to service incoming client requests. By 16:30 UTC our Engineering Team had identified the cause of the unhealthy connections, and by 19:00 UTC had prepared a fix to resolve the fault. By 20:15 UTC the fix was fully deployed across the cluster, returning the connection state to healthy and available for incoming client requests.
Status: Postmortem
Impact: Minor | Started At: Jan. 21, 2024, 4:11 p.m.
Description: On 16 January 2024 at 10:38 UTC, an internal firewall configuration issue blocked client connection requests to the Wasabi Management Console, preventing users from logging into their Wasabi accounts. The internal connection failure prevented necessary Wasabi services from communicating successfully across all regional subnets. Our Operations team isolated the problem, made changes to the firewall ACLs, and re-advertised client connections. At 13:18 UTC, the configuration issue was resolved, and internal connections were able to communicate successfully.
Status: Postmortem
Impact: None | Started At: Jan. 16, 2024, noon
Description: On 16 January 2024 at 10:38 UTC, an internal firewall configuration issue blocked client connection requests to the Wasabi Account Control API, preventing users from logging into their Wasabi accounts. The internal connection failure prevented necessary Wasabi services from communicating successfully across all regional subnets. Our Operations team isolated the problem, made changes to the firewall ACLs, and re-advertised client connections. At 13:18 UTC, the configuration issue was resolved, and internal connections were able to communicate successfully.
Status: Postmortem
Impact: None | Started At: Jan. 16, 2024, noon
Join OutLogger to be notified when any of your vendors or the components you use experience an outage or downtime. Join for free - no credit card required.