Last checked: 6 minutes ago
Get notified about any outages, downtime or incidents for DigitalOcean and 1800+ other cloud vendors. Monitor 10 companies for free.
Outage and incident data over the last 30 days for DigitalOcean.
Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!
Sign Up Now
OutLogger tracks the status of these components for DigitalOcean:
Component | Status |
---|---|
API | Active |
Billing | Active |
Cloud Control Panel | Active |
Cloud Firewall | Active |
Community | Active |
DNS | Active |
Reserved IP | Active |
Support Center | Performance Issues |
WWW | Active |
App Platform | Active |
Amsterdam | Active |
Bangalore | Active |
Frankfurt | Active |
Global | Active |
London | Active |
New York | Active |
San Francisco | Active |
Singapore | Active |
Sydney | Active |
Toronto | Active |
Container Registry | Active |
AMS3 | Active |
BLR1 | Active |
FRA1 | Active |
Global | Active |
NYC3 | Active |
SFO2 | Active |
SFO3 | Active |
SGP1 | Active |
SYD1 | Active |
Droplets | Active |
AMS2 | Active |
AMS3 | Active |
BLR1 | Active |
FRA1 | Active |
Global | Active |
LON1 | Active |
NYC1 | Active |
NYC2 | Active |
NYC3 | Active |
SFO1 | Active |
SFO2 | Active |
SFO3 | Active |
SGP1 | Active |
SYD1 | Active |
TOR1 | Active |
Event Processing | Active |
AMS2 | Active |
AMS3 | Active |
BLR1 | Active |
FRA1 | Active |
Global | Active |
LON1 | Active |
NYC1 | Active |
NYC2 | Active |
NYC3 | Active |
SFO1 | Active |
SFO2 | Active |
SFO3 | Active |
SGP1 | Active |
SYD1 | Active |
TOR1 | Active |
Functions | Active |
AMS3 | Active |
BLR1 | Active |
FRA1 | Active |
Global | Active |
LON1 | Active |
NYC1 | Active |
SFO3 | Active |
SGP1 | Active |
SYD1 | Active |
TOR1 | Active |
GPU Droplets | Active |
Global | Active |
NYC2 | Active |
TOR1 | Active |
Kubernetes | Active |
AMS3 | Active |
BLR1 | Active |
FRA1 | Active |
Global | Active |
LON1 | Active |
NYC1 | Active |
NYC3 | Active |
SFO2 | Active |
SFO3 | Active |
SGP1 | Active |
SYD1 | Active |
TOR1 | Active |
Load Balancers | Active |
AMS2 | Active |
AMS3 | Active |
BLR1 | Active |
FRA1 | Active |
Global | Active |
LON1 | Active |
NYC1 | Active |
NYC2 | Active |
NYC3 | Active |
SFO1 | Active |
SFO2 | Active |
SFO3 | Active |
SGP1 | Active |
SYD1 | Active |
TOR1 | Active |
Managed Databases | Active |
AMS3 | Active |
BLR1 | Active |
FRA1 | Active |
Global | Active |
LON1 | Active |
NYC1 | Active |
NYC2 | Active |
NYC3 | Active |
SFO2 | Active |
SFO3 | Active |
SGP1 | Active |
SYD1 | Active |
TOR1 | Active |
Monitoring | Active |
AMS2 | Active |
AMS3 | Active |
BLR1 | Active |
FRA1 | Active |
Global | Active |
LON1 | Active |
NYC1 | Active |
NYC2 | Active |
NYC3 | Active |
SFO1 | Active |
SFO2 | Active |
SFO3 | Active |
SGP1 | Active |
SYD1 | Active |
TOR1 | Active |
Networking | Active |
AMS2 | Active |
AMS3 | Active |
BLR1 | Active |
FRA1 | Active |
Global | Active |
LON1 | Active |
NYC1 | Active |
NYC2 | Active |
NYC3 | Active |
SFO1 | Active |
SFO2 | Active |
SFO3 | Active |
SGP1 | Active |
SYD1 | Active |
TOR1 | Active |
Spaces | Active |
AMS3 | Active |
BLR1 | Active |
FRA1 | Active |
Global | Active |
NYC3 | Active |
SFO2 | Active |
SFO3 | Active |
SGP1 | Active |
SYD1 | Active |
Spaces CDN | Active |
AMS3 | Active |
FRA1 | Active |
Global | Active |
NYC3 | Active |
SFO3 | Active |
SGP1 | Active |
SYD1 | Active |
Volumes | Active |
AMS2 | Active |
AMS3 | Active |
BLR1 | Active |
FRA1 | Active |
Global | Active |
LON1 | Active |
NYC1 | Active |
NYC2 | Active |
NYC3 | Active |
SFO1 | Active |
SFO2 | Active |
SFO3 | Active |
SGP1 | Active |
SYD1 | Active |
TOR1 | Active |
VPC | Active |
AMS2 | Active |
AMS3 | Active |
BLR1 | Active |
FRA1 | Active |
Global | Active |
LON1 | Active |
NYC1 | Active |
NYC2 | Active |
NYC3 | Active |
SFO1 | Active |
SFO2 | Active |
SFO3 | Active |
SGP1 | Active |
SYD1 | Active |
TOR1 | Active |
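The table above mirrors the component states published on DigitalOcean's official status page. If you want to check these states programmatically rather than through OutLogger, the sketch below polls the public status page directly. It assumes DigitalOcean's status page exposes the standard Atlassian Statuspage JSON endpoint (`/api/v2/components.json`); the URL and field names follow that convention and are assumptions, not something confirmed by this page.

```python
# Hypothetical sketch: poll DigitalOcean's status page for per-component states.
# Assumes the standard Atlassian Statuspage public JSON API is available at
# status.digitalocean.com/api/v2/components.json -- verify before relying on it.
import json
import urllib.request

STATUS_COMPONENTS_URL = "https://status.digitalocean.com/api/v2/components.json"

def fetch_component_statuses(url: str = STATUS_COMPONENTS_URL) -> dict[str, str]:
    """Return a mapping of component name -> reported status (e.g. 'operational')."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.load(resp)
    return {c["name"]: c["status"] for c in payload.get("components", [])}

if __name__ == "__main__":
    for name, status in sorted(fetch_component_statuses().items()):
        flag = "" if status == "operational" else "  <-- check this"
        print(f"{name}: {status}{flag}")
```

Anything not reported as `operational` (for example, the Support Center's "Performance Issues" above) would be flagged in the output.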
View the latest incidents for DigitalOcean and check for official updates:
Description: Our Engineering team has confirmed the full resolution of the issue with the DigitalOcean App Platform and Container Registry in our NYC regions. Users should no longer experience any issues while pushing to Container Registries and working with App Platform builds. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Status: Resolved
Impact: Minor | Started At: Aug. 5, 2024, 3:21 a.m.
Description: From 23:47 UTC until 01:11 UTC, users may have experienced errors when attempting to create Spaces Access Keys in the Cloud Control Panel. Our Engineering team has identified and resolved the issue. The impact has been resolved and users should now be able to create Spaces Access Keys. We apologize for any inconvenience this may have caused. If you have any questions or continue to experience issues, please reach out via a Support ticket on your account.
Status: Resolved
Impact: None | Started At: July 31, 2024, 1:29 a.m.
Description: As of 05:05 UTC, our Engineering team has confirmed the full resolution of the issue impacting Snapshot and Backup Images in the TOR1 region. We have verified that the Snapshot and Backup events in the region are processing without any failures. Users should also be able to create Droplets from Snapshot and Backup images in this region without any issues. Thank you for your patience and understanding. If you should encounter any further issues at all, then please open a ticket with our Support team.
Status: Resolved
Impact: Minor | Started At: July 29, 2024, 4:27 a.m.
Description:
### **Incident Summary**
On July 24, 2024, DigitalOcean experienced downtime from near-simultaneous crashes affecting multiple hypervisors (ref: [https://docs.digitalocean.com/glossary/hypervisor/](https://docs.digitalocean.com/glossary/hypervisor/)) in several regions. In total, fourteen hypervisors crashed, the majority of which were in the FRA1 and AMS3 regions, the remaining being in LON1, SGP1, and NYC1. A routine kernel fix to improve platform stability was being deployed to a subset of hypervisors across the fleet, and that kernel fix had an unexpected conflict with a separate automated maintenance routine, causing those hypervisors to experience kernel panics and become unresponsive. This led to an interruption in service for customer Droplets, and other Droplet-based services, until the affected hypervisors were rebooted and restored to a functional state.
### **Incident Details**
* **Root Cause**: A kernel fix being rolled out to some hypervisors through an incremental process conflicted with a periodic maintenance operation which was in progress on a subset of those hypervisors.
* **Impact**: The affected hypervisors crashed, causing Droplets (including other Droplet-based services) running on these hypervisors to become unresponsive. Customers were unable to reach them via networking, process events like power off/on, or see monitoring.
* **Response**: After gathering diagnostic information and determining the root cause, we rebooted the affected hypervisors in order to safely restore service. Manual remediation was done on hypervisors that received the kernel fix to ensure it was applied while the maintenance operation was not in progress.
### **Timeline of Events (UTC)**
* July 24 22:55 - Rollout of the kernel fix begins.
* July 24 23:10 - First hypervisor crash occurs and the Operations team begins investigating.
* July 24 23:55 - Rollout of the kernel fix ends.
* July 25 00:14 - Internal incident response begins, following further crash alerts firing.
* July 25 00:35 - Diagnostic tests are run on impacted hypervisors to gather information.
* July 25 00:47 - Kernel panic messages are observed on impacted hypervisors. Additional Engineering teams are paged for investigation.
* July 25 01:42 - Operations team begins coordinated effort to reboot all impacted hypervisors to restore customer services.
* July 25 01:50 - Root cause for the crashes is determined to be the conflict between the kernel fix and maintenance operation.
* July 25 03:22 - Reboots of all impacted hypervisors complete, all services are restored to normal operation.
### **Remediation Actions**
* The continued rollout of this specific kernel fix, as well as future rollouts of this type of fix, will not be done on hypervisors while the maintenance operation is in progress, to avoid any possible conflicts.
* Further investigation will be conducted to understand how the kernel fix and the maintenance operation conflicted to cause a kernel crash to help avoid similar problems in the future.
Status: Postmortem
Impact: Minor | Started At: July 25, 2024, 12:33 a.m.
Description: From 17:22 UTC to 17:27 UTC, we experienced an issue with requests to the Cloud Control Panel and API. During that timeframe, users may have experienced an increase in 5xx errors for Cloud/API requests. The issue self-resolved quickly and our Engineering team is continuing to investigate the root cause to ensure it does not occur again. Thank you for your patience, and we apologize for any inconvenience. If you continue to experience any issues, please open a Support ticket for further analysis.
Status: Resolved
Impact: None | Started At: July 23, 2024, 7:20 p.m.
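All of the incidents listed above are already Resolved or in Postmortem. To watch for currently open incidents in your own tooling, the sketch below queries the status page's unresolved-incidents feed. As with the earlier example, the endpoint (`/api/v2/incidents/unresolved.json`) and the `name`/`status`/`impact` fields are assumed from the standard Statuspage API rather than confirmed by this page.

```python
# Hypothetical sketch: list any incidents DigitalOcean still reports as unresolved.
# Assumes the standard Atlassian Statuspage endpoint is available at
# status.digitalocean.com/api/v2/incidents/unresolved.json -- confirm before use.
import json
import urllib.request

UNRESOLVED_URL = "https://status.digitalocean.com/api/v2/incidents/unresolved.json"

def fetch_unresolved_incidents(url: str = UNRESOLVED_URL) -> list[dict]:
    """Return the raw list of unresolved incident records (may be empty)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp).get("incidents", [])

if __name__ == "__main__":
    incidents = fetch_unresolved_incidents()
    if not incidents:
        print("No unresolved incidents reported.")
    for inc in incidents:
        # 'name', 'status', and 'impact' are standard Statuspage incident fields.
        print(f"{inc.get('name')} | status={inc.get('status')} | impact={inc.get('impact')}")
```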
Join OutLogger to be notified when any of your vendors or the components you use experience an outage or downtime. Join for free - no credit card required.