Is there a DigitalOcean outage?

DigitalOcean status: Minor Outage

Last checked: 6 minutes ago

Get notified about any outages, downtime, or incidents for DigitalOcean and 1,800+ other cloud vendors. Monitor 10 companies for free.

Subscribe for updates

DigitalOcean outages and incidents

Outage and incident data over the last 30 days for DigitalOcean.

There have been 8 outages or incidents for DigitalOcean in the last 30 days.

Severity Breakdown:

Tired of searching for status updates?

Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!

Sign Up Now

Components and Services Monitored for DigitalOcean

OutLogger tracks the status of these components for DigitalOcean:

API Active
Billing Active
Cloud Control Panel Active
Cloud Firewall Active
Community Active
DNS Active
Reserved IP Active
Support Center Performance Issues
WWW Active
Amsterdam Active
Bangalore Active
Frankfurt Active
Global Active
London Active
New York Active
San Francisco Active
Singapore Active
Sydney Active
Toronto Active
Regional data centers (all Active across monitored products): AMS2, AMS3, BLR1, FRA1, Global, LON1, NYC1, NYC2, NYC3, SFO1, SFO2, SFO3, SGP1, SYD1, TOR1
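The listing above is what OutLogger monitors; the same component statuses can also be queried directly. The sketch below is a minimal example, assuming DigitalOcean's official status page (status.digitalocean.com) is a standard Atlassian Statuspage instance that exposes the public /api/v2/components.json endpoint:

    # Minimal sketch: list DigitalOcean component statuses.
    # Assumes status.digitalocean.com serves the standard Statuspage
    # /api/v2/components.json endpoint (an assumption, not confirmed by this page).
    import json
    import urllib.request

    STATUS_PAGE = "https://status.digitalocean.com"

    def fetch_components(base_url=STATUS_PAGE):
        """Return the list of components reported by the status page."""
        with urllib.request.urlopen(f"{base_url}/api/v2/components.json") as resp:
            payload = json.load(resp)
        return payload.get("components", [])

    if __name__ == "__main__":
        for component in fetch_components():
            # Statuspage reports values such as "operational",
            # "degraded_performance", "partial_outage", "major_outage".
            print(f"{component['name']}: {component['status']}")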

Latest DigitalOcean outages and incidents

View the latest incidents for DigitalOcean and check for official updates:

Updates:

  • Time: Aug. 5, 2024, 5:48 a.m.
    Status: Resolved
    Update: Our Engineering team has confirmed the full resolution of the issue with the DigitalOcean App Platform and Container Registry in our NYC regions. Users should no longer experience any issues while pushing to Container Registries and working with App Platform builds. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
  • Time: Aug. 5, 2024, 5:21 a.m.
    Status: Monitoring
    Update: As of 03:08 UTC, our Engineering team has deployed a fix to resolve the issue with the DigitalOcean App Platform and Container Registry in our NYC regions. At this time, users should no longer experience issues while pushing to Container Registries and working with App Platform builds. We are currently monitoring the situation closely and will post an update as soon as the issue is fully resolved. Thank you for your patience and we apologize for any inconvenience.
  • Time: Aug. 5, 2024, 3:21 a.m.
    Status: Investigating
    Update: Our Engineering team is investigating an issue with the DigitalOcean App Platform and Container Registry in our NYC regions. At this time, users may experience delays or timeouts while pushing to Container Registries, as well as delays when working with App Platform builds. We apologize for the inconvenience and will share an update once we have more information.

Updates:

  • Time: July 31, 2024, 1:29 a.m.
    Status: Resolved
    Update: From 23:47 UTC until 01:11 UTC, users may have experienced errors when attempting to create Spaces Access Keys in the Cloud Control Panel. Our Engineering team has identified and resolved the issue. The impact has been resolved and users should now be able to create Spaces Access Keys. We apologize for any inconvenience this may have caused. If you have any questions or continue to experience issues, please reach out via a Support ticket on your account.

Updates:

  • Time: July 29, 2024, 5:52 a.m.
    Status: Resolved
    Update: As of 05:05 UTC, our Engineering team has confirmed the full resolution of the issue impacting Snapshot and Backup Images in the TOR1 region. We have verified that the Snapshot and Backup events in the region are processing without any failures. Users should also be able to create Droplets from Snapshot and Backup images in this region without any issues. Thank you for your patience and understanding. If you should encounter any further issues at all, then please open a ticket with our Support team.
  • Time: July 29, 2024, 5:33 a.m.
    Status: Monitoring
    Update: As of 04:30 UTC, our Engineering team was able to take action to mitigate the impact of this issue. Snapshot and Backup events will be processed normally in the TOR1 region. Users should no longer experience errors when creating or restoring Snapshot and Backup images. Creating Droplets from Snapshot and Backup images in this region should also be processed without any issues. We apologize for any inconvenience caused and will post an update once the issue is fully resolved.
  • Time: July 29, 2024, 4:27 a.m.
    Status: Investigating
    Update: As of 03:14 UTC, our Engineering team is investigating an issue with Snapshot and Backup failures in our TOR1 region. Users may experience errors when creating or restoring Snapshot and Backup images. Creating Droplets from Snapshot and Backup images in this region is also affected. We apologize for the inconvenience and will share an update once we have more information.

Updates:

  • Time: July 26, 2024, 8:41 p.m.
    Status: Postmortem
    Update:
    Incident Summary: On July 24, 2024, DigitalOcean experienced downtime from near-simultaneous crashes affecting multiple hypervisors (ref: https://docs.digitalocean.com/glossary/hypervisor/) in several regions. In total, fourteen hypervisors crashed, the majority of which were in the FRA1 and AMS3 regions, the remaining being in LON1, SGP1, and NYC1. A routine kernel fix to improve platform stability was being deployed to a subset of hypervisors across the fleet, and that kernel fix had an unexpected conflict with a separate automated maintenance routine, causing those hypervisors to experience kernel panics and become unresponsive. This led to an interruption in service for customer Droplets and other Droplet-based services until the affected hypervisors were rebooted and restored to a functional state.
    Incident Details:
      - Root Cause: A kernel fix being rolled out to some hypervisors through an incremental process conflicted with a periodic maintenance operation which was in progress on a subset of those hypervisors.
      - Impact: The affected hypervisors crashed, causing Droplets (including other Droplet-based services) running on these hypervisors to become unresponsive. Customers were unable to reach them via networking, process events like power off/on, or see monitoring.
      - Response: After gathering diagnostic information and determining the root cause, we rebooted the affected hypervisors in order to safely restore service. Manual remediation was done on hypervisors that received the kernel fix to ensure it was applied while the maintenance operation was not in progress.
    Timeline of Events (UTC):
      - July 24 22:55 - Rollout of the kernel fix begins.
      - July 24 23:10 - First hypervisor crash occurs and the Operations team begins investigating.
      - July 24 23:55 - Rollout of the kernel fix ends.
      - July 25 00:14 - Internal incident response begins, following further crash alerts firing.
      - July 25 00:35 - Diagnostic tests are run on impacted hypervisors to gather information.
      - July 25 00:47 - Kernel panic messages are observed on impacted hypervisors. Additional Engineering teams are paged for investigation.
      - July 25 01:42 - Operations team begins coordinated effort to reboot all impacted hypervisors to restore customer services.
      - July 25 01:50 - Root cause for the crashes is determined to be the conflict between the kernel fix and maintenance operation.
      - July 25 03:22 - Reboots of all impacted hypervisors complete; all services are restored to normal operation.
    Remediation Actions:
      - The continued rollout of this specific kernel fix, as well as future rollouts of this type of fix, will not be done on hypervisors while the maintenance operation is in progress, to avoid any possible conflicts.
      - Further investigation will be conducted to understand how the kernel fix and the maintenance operation conflicted to cause a kernel crash, to help avoid similar problems in the future.
  • Time: July 25, 2024, 5:48 a.m.
    Status: Resolved
    Update: As of 04:49 UTC our Engineering team has confirmed full resolution of the issue impacting network connectivity in multiple regions. Users should now be able to process events normally for Droplets and Droplet-based services like Load Balancers, Kubernetes or Database clusters, etc. If you continue to experience problems, please open a ticket with our support team from your Cloud Control Panel. Thank you for your patience and we apologize for any inconvenience.
  • Time: July 25, 2024, 4:55 a.m.
    Status: Monitoring
    Update: Our Engineering teams have applied a fix to all impacted hypervisors. Customers should no longer see networking issues with their Droplets and other Droplet-based resources. We continue to monitor the situation closely and will post another update once we're confident that the issue is fully resolved.
  • Time: July 25, 2024, 2:57 a.m.
    Status: Identified
    Update: Our Engineering teams have identified the cause of the issue with network connectivity in multiple regions and are actively working on a fix. We apologize for any inconvenience, and we will share more information as soon as it's available.
  • Time: July 25, 2024, 1:47 a.m.
    Status: Investigating
    Update: Multiple Engineering teams are continuing to investigate the cause of the issue impacting networking in multiple regions. At this time, we have identified that the issue impacts FRA1, AMS3, LON1, SGP1, and NYC1. We are actively remediating hypervisors in order to restore connectivity to customer resources, while working on determining root cause and a long-term remediation. Users who have resources on affected hypervisors (Droplets and Droplet-based services) may continue to experience loss of network connectivity, loss of monitoring graphs, errors when processing events like power off/on, and/or inability to access the recovery console for Droplets. We will post another update soon.
  • Time: July 25, 2024, 12:33 a.m.
    Status: Investigating
    Update: Our Engineering team is currently investigating an issue impacting networking in multiple regions, especially FRA1 and AMS3. Users may experience network connectivity loss to Droplets and Droplet-based services, like Managed Kubernetes and Database Clusters, as well as encounter errors when trying to process events like power off/on, etc. We apologize for the inconvenience and will share an update once we have more information.

Updates:

  • Time: July 23, 2024, 7:20 p.m.
    Status: Resolved
    Update: From 17:22 UTC to 17:27 UTC, we experienced an issue with requests to the Cloud Control Panel and API. During that timeframe, users may have experienced an increase in 5xx errors for Cloud/API requests. The issue self-resolved quickly, and our Engineering team is continuing to investigate the root cause to ensure it does not recur. Thank you for your patience, and we apologize for any inconvenience. If you continue to experience any issues, please open a Support ticket for further analysis.
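If you prefer to pull incident updates like the ones above yourself, the sketch below shows one way to do it. It again assumes a standard Atlassian Statuspage JSON API behind status.digitalocean.com, specifically the /api/v2/incidents/unresolved.json endpoint; the field names follow the usual Statuspage payload and are not taken from this page:

    # Minimal sketch: print unresolved DigitalOcean incidents with their
    # latest official update. Endpoint and field names assume a standard
    # Atlassian Statuspage /api/v2/ payload.
    import json
    import urllib.request

    URL = "https://status.digitalocean.com/api/v2/incidents/unresolved.json"

    def unresolved_incidents():
        with urllib.request.urlopen(URL) as resp:
            return json.load(resp).get("incidents", [])

    if __name__ == "__main__":
        incidents = unresolved_incidents()
        if not incidents:
            print("No unresolved incidents.")
        for incident in incidents:
            print(f"{incident['name']} [{incident['status']}]")
            updates = incident.get("incident_updates", [])
            if updates:
                latest = updates[0]  # Statuspage lists the newest update first
                print(f"  {latest['created_at']}: {latest['body'][:200]}")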

Check the status of similar companies and alternatives to DigitalOcean

Hudl - Systems Active
OutSystems - Systems Active
Postman - Systems Active
Mendix - Systems Active
Bandwidth - Issues Detected
DataRobot - Systems Active
Grafana Cloud - Systems Active
SmartBear Software - Systems Active
Test IO - Systems Active
Copado Solutions - Systems Active
CircleCI - Systems Active
LaunchDarkly - Systems Active

Frequently Asked Questions - DigitalOcean

Is there a DigitalOcean outage?
The current status of DigitalOcean is: Minor Outage
Where can I find the official status page of DigitalOcean?
The official status page for DigitalOcean is here
How can I get notified if DigitalOcean is down or experiencing an outage?
To get notified of any status changes to DigitalOcean, simply sign up for OutLogger's free monitoring service. OutLogger checks the official status of DigitalOcean every few minutes and will notify you of any changes. You can view the status of all your cloud vendors in one dashboard. Sign up here (or see the do-it-yourself polling sketch after this FAQ).
What does DigitalOcean do?
DigitalOcean provides cloud hosting solutions designed for small and mid-sized businesses that are easy to use and can be scaled as needed.
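For readers who would rather roll their own notifications, here is a minimal polling sketch of the approach described in the FAQ above: check the overall status indicator every few minutes and report when it changes. The endpoint and field names assume DigitalOcean's status page is a standard Atlassian Statuspage instance serving /api/v2/status.json; the interval and the notify() hook are placeholders to adapt:

    # Minimal polling sketch (assumptions noted above): watch the overall
    # status indicator and report when it changes.
    import json
    import time
    import urllib.request

    STATUS_URL = "https://status.digitalocean.com/api/v2/status.json"
    POLL_SECONDS = 300  # "every few minutes"

    def current_status():
        with urllib.request.urlopen(STATUS_URL) as resp:
            status = json.load(resp)["status"]
        return status["indicator"], status["description"]

    def notify(message):
        # Placeholder: wire this up to email, Slack, PagerDuty, etc.
        print(message)

    if __name__ == "__main__":
        last_indicator, _ = current_status()
        while True:
            time.sleep(POLL_SECONDS)
            indicator, description = current_status()
            if indicator != last_indicator:
                notify(f"DigitalOcean status changed: {description} ({indicator})")
                last_indicator = indicator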