
Is there a DigitalOcean outage?

DigitalOcean status: Minor Outage

Last checked: 24 seconds ago

Get notified about any outages, downtime, or incidents for DigitalOcean and 1800+ other cloud vendors. Monitor 10 companies for free.

Subscribe for updates

DigitalOcean outages and incidents

Outage and incident data over the last 30 days for DigitalOcean.

There have been 8 outages or incidents for DigitalOcean in the last 30 days.

Severity Breakdown:

Tired of searching for status updates?

Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!

Sign Up Now

Components and Services Monitored for DigitalOcean

OutLogger tracks the status of these components for DigitalOcean:

API Active
Billing Active
Cloud Control Panel Active
Cloud Firewall Active
Community Active
DNS Active
Reserved IP Active
Support Center Performance Issues
WWW Active
Amsterdam Active
Bangalore Active
Frankfurt Active
Global Active
London Active
New York Active
San Francisco Active
Singapore Active
Sydney Active
Toronto Active
Availability is also tracked per product across DigitalOcean's individual data centers. Across all monitored products, the following regions are currently reported Active: AMS2, AMS3, BLR1, FRA1, LON1, NYC1, NYC2, NYC3, SFO1, SFO2, SFO3, SGP1, SYD1, TOR1, and Global.

Latest DigitalOcean outages and incidents

View the latest incidents for DigitalOcean and check for official updates:

Updates:

  • Time: Aug. 15, 2024, 11:48 a.m.
    Status: Resolved
    Update: As of 10:45 UTC, our engineering team has resolved the issue with networking in SFO2 and SFO3 regions, and networking in the regions should now be operating normally. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
  • Time: Aug. 15, 2024, 11:21 a.m.
    Status: Monitoring
    Update: Our Engineering team has identified the cause of the issue with networking in SFO2 and SFO3 regions, and has implemented a fix. We are currently monitoring the situation and will post an update as soon as the issue is fully resolved.
  • Time: Aug. 15, 2024, 10:02 a.m.
    Status: Investigating
    Update: As of 08:00 UTC, our Engineering team is currently investigating an issue impacting networking in SFO2 and SFO3 regions. Users may experience network connectivity loss to Droplets and Droplet-based services, like Managed Kubernetes and Database Clusters. We apologize for the inconvenience and will share an update once we have more information.

Updates:

  • Time: Aug. 15, 2024, 5:40 a.m.
    Status: Resolved
    Update: Our Engineering team has resolved the issue with Droplet creates and Snapshots. As of 05:30 UTC, users should be able to create Droplets, Snapshots and process events. Droplet backed services should also be operating normally. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
  • Time: Aug. 15, 2024, 5:11 a.m.
    Status: Monitoring
    Update: Our Engineering team has identified the cause of the issue with Droplet creates and Snapshots, and has implemented a fix. We are currently monitoring the situation and will post an update as soon as the issue is fully resolved.
  • Time: Aug. 15, 2024, 4:47 a.m.
    Status: Investigating
    Update: Our Engineering team is continuing to investigate the issue with Droplet creates and Snapshots. A subset of users may also encounter errors when creating Droplet-based products such as Managed Databases, Load Balancers, and Managed Kubernetes in the TOR1 region. We will post another update once we have more information.
  • Time: Aug. 15, 2024, 4:29 a.m.
    Status: Investigating
    Update: As of 02:11 UTC, our Engineering team is investigating an issue with Droplet and Snapshot creation failures in the TOR1 region. Users may experience errors when rebuilding or restoring Droplets and Snapshots. Creating Droplets from Snapshots is also affected. We apologize for the inconvenience and will share an update once we have more information.

Updates:

  • Time: Aug. 13, 2024, 3:10 a.m.
    Status: Resolved
    Update: From 23:08 August 12 to 01:26 August 13 UTC, customers may have experienced failures with Droplet creation, power on events, and restore events in the NYC3 region. Our Engineering team has confirmed resolution of this issue. Thank you for your patience. If you continue to experience any problems, please open a support ticket from within your account.
  • Time: Aug. 13, 2024, 2:25 a.m.
    Status: Monitoring
    Update: Our Engineering team has implemented mitigations to stop the Droplet creation failures in the NYC3 region. The team is monitoring the situation to ensure that everything is fully resolved. We apologize for the inconvenience and will share an update if there are any new developments.
  • Time: Aug. 13, 2024, 1:49 a.m.
    Status: Investigating
    Update: Multiple Engineering teams are continuing to investigate errors with creating Droplets, other Droplet-based products, and events such as power/restore in the NYC3 region. Users should no longer encounter errors when performing these operations, as mitigations are now in place, and teams have observed error rates returning to normal. We apologize for the inconvenience and will share another update once we have more information.
  • Time: Aug. 13, 2024, 1:01 a.m.
    Status: Investigating
    Update: Our Engineering team is continuing to investigate issues with creating Droplets and other Droplet-based products in the NYC3 region. At this time, customers may also encounter errors when powering on some existing Droplets, creating new Droplets, or restoring a backup or snapshot. We apologize for the inconvenience and will share an update once we have more information.
  • Time: Aug. 13, 2024, 12:27 a.m.
    Status: Investigating
    Update: Our Engineering team is investigating an issue with Droplet creation in NYC3. During this time, users will see an elevated failure rate for the creation of Droplets and other Droplet-based products, such as Load Balancers, Managed Databases, or Kubernetes, in NYC3.

Updates:

  • Time: Aug. 6, 2024, 8:40 p.m.
    Status: Resolved
    Update: From 09:52 UTC to 19:32 UTC, customers may have experienced failures with Droplet rebuild and restore events in all regions. Our Engineering team has confirmed full resolution of this issue. Thank you for your patience. If you continue to experience any problems, please open a support ticket from within your account.
  • Time: Aug. 6, 2024, 7:57 p.m.
    Status: Monitoring
    Update: Our Engineering team identified the root cause and performed a service rollback to resolve the issue that was causing failure in Droplet rebuild and restore events in all regions. We are actively monitoring the situation to ensure stability and will provide an update once the incident has been fully resolved. Thank you for your patience and we apologize for the inconvenience.
  • Time: Aug. 6, 2024, 7:19 p.m.
    Status: Investigating
    Update: Our Engineering team is investigating an issue with processing of rebuild and restore events for Droplets, in all regions. During this time, users may experience long delays or errors when restoring or rebuilding a Droplet. We apologize for the inconvenience and will share an update once we have more information.

Updates:

  • Time: Aug. 8, 2024, 1:31 p.m.
    Status: Postmortem
    Update:

    **Incident Summary**

    On August 05, 2024 at 16:30 UTC, DigitalOcean experienced a disruption to internal service discovery. Customers experienced full disruption of creates, event processing, and management of other DigitalOcean products globally. Due to an error in a replication configuration that propagated globally, internal services were unable to correctly discover other services they depended on. This did not affect the availability of existing customer resources.

    **Incident Details**

    Root Cause: An incorrect replication configuration was deployed against the datastore which powers the internal service discovery service at DigitalOcean. The incorrect configuration specified a new datacenter with zero keys as having 100% ownership of all keys in the datastore. This had an immediate global impact on the data storage layer and disrupted the quorum of datastore nodes across all regions. Clients of the service were unable to read from or write to the datastore during this time, which had a cascading effect.

    Impact: The first observable impact was a complete disruption to the I/O layer of the backing datastore. These events are consumed by a wide variety of backing services that compose the DigitalOcean Cloud platform. This incident impacted:

      • Droplet Creates
      • Droplet Updates
      • Network Creates
      • Login / Authentication services
      • Block Storage Volumes Snapshot creation
      • Spaces/CDN Creates
      • Spaces Updates
      • Managed Kubernetes cluster creates
      • Managed Databases creates

    Other services across DigitalOcean, outside of the eventing flow, also rely on service discovery to talk to each other, so customers may have seen additional impact when attempting to manage assorted services through the Cloud Control Panel or via the API.

    Response: After gathering diagnostic information and determining the root cause, an updated, correct replication configuration was deployed. Some regions ingested the new replication configuration and started to recover. Teams identified additional regions that took longer to ingest the updated configuration, manually invoked the change directly on the nodes, and then ran local repairs on the data to ensure alignment before moving to the next region. Engineering teams cleaned up any remaining failed events and processed pending events that had not yet timed out. At the conclusion of that cleanup effort, the incident was declared resolved, and the cloud platform stabilized.

    Timeline of Events (UTC):

      • Aug 05 16:30 - Rollout of the new datastore cluster begins.
      • Aug 05 16:35 - First report of service discovery unavailability is raised internally.
      • Aug 05 16:42 - Lack of quorum and datastore ownership is identified as the blocking issue.
      • Aug 05 17:00 - The replication configuration change, adding the new datacenter, is identified as the root cause behind the ownership change.
      • Aug 05 17:16 - The replication configuration change is reverted and run against the region that had become the datastore owner. Some events start to fail faster at this point, changing the error from a distinct timeout to a failure to find endpoints.
      • Aug 05 18:25 - Regions that have not detected or applied the reverted configuration are identified, and engineers start manually applying the configuration and running repairs on the datastore for those regions.
      • Aug 05 19:10 - Remaining failure events resolve, and the platform stabilizes.

    Remediation Actions: The replication configuration deployment happened outside of a normal maintenance window. Moving forward, these types of maintenances will be performed inside a declared maintenance window, with any potential for customer impact communicated via a maintenance notice posted on the status page. The process documentation for this type of deployment will be updated to reflect the current requirements and clearly outline the steps and expectations for each stage of a new deployment. Additionally, the manual processes that occurred will be automated to help reduce the potential for human error. Multiple teams are also evaluating whether the current topology of the internal datastore is appropriate, and whether there are any regionalizations or multi-layered approaches DigitalOcean can take to help ensure internal service discovery remains as available as possible.
  • Time: Aug. 5, 2024, 7:46 p.m.
    Status: Resolved
    Update: Our engineering team has resolved the issue impacting the management of multiple services via the Cloud Control Panel and the API, including event processing. All services should now be functioning normally. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
  • Time: Aug. 5, 2024, 7:20 p.m.
    Status: Monitoring
    Update: Our Engineering teams have identified a remediation strategy and are working to roll it out to all regions. We are starting to see services recover as a result of this fix being deployed. Customers may begin to see some management functionality for services return and we will continue to provide updates as we work to return full functionality.
  • Time: Aug. 5, 2024, 5:39 p.m.
    Status: Identified
    Update: Our Engineering teams have now identified the cause of the issue impacting multiple services. They continue to work on resolving the impact to customers. During this time, users may experience errors when attempting to manage services through the UI or the API, including accessing Droplet information or the processing of Droplet events. Existing customer resources, such as Droplets, remain online and accessible. We apologize for the inconvenience and will share an update once we have more information.
  • Time: Aug. 5, 2024, 5:11 p.m.
    Status: Investigating
    Update: Multiple Engineering teams are continuing to investigate the issue. Customers may also encounter errors when managing other services and using the API at this time. We will post another update soon.
  • Time: Aug. 5, 2024, 5:05 p.m.
    Status: Investigating
    Update: As of 16:30 UTC, Our Engineering team is investigating an issue with processing of events in all regions. During this time users may experience failed events on Droplets. We apologize for the inconvenience and will share an update once we have more information.
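The failure mode in the postmortem above (a brand-new datacenter with zero keys being assigned 100% key ownership, breaking quorum everywhere) can be modeled generically. The sketch below is illustrative only: the postmortem does not name DigitalOcean's datastore, and the `validate_replication` helper and its parameters are hypothetical. It simply checks a replication-ownership map against the set of datacenters that actually hold data.

```python
def validate_replication(ownership: dict[str, float], populated_dcs: set[str]) -> list[str]:
    """Flag replication-map errors of the kind described in the postmortem.

    ownership: fraction of key ownership per datacenter (should sum to 1.0).
    populated_dcs: datacenters that actually hold replicas of the data.
    Returns human-readable problems; an empty list means the map looks sane.
    """
    problems = []
    total = sum(ownership.values())
    if abs(total - 1.0) > 1e-9:
        problems.append(f"ownership fractions sum to {total}, expected 1.0")
    for dc, share in ownership.items():
        if share > 0 and dc not in populated_dcs:
            # The incident's failure mode: a new datacenter with zero keys
            # was assigned ownership, so quorum reads/writes against it
            # could never succeed.
            problems.append(f"{dc} owns {share:.0%} of keys but holds no data")
    return problems


# The bad config from the incident, modeled abstractly: a new, empty
# datacenter ("dc-new") is given 100% ownership of all keys.
bad = validate_replication({"dc-new": 1.0}, populated_dcs={"nyc", "ams", "sfo"})
good = validate_replication({"nyc": 0.5, "ams": 0.5}, populated_dcs={"nyc", "ams", "sfo"})
```

A pre-deployment check along these lines is one way the remediation goal ("automate the manual processes to reduce the potential for human error") is often realized in practice.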

Check the status of similar companies and alternatives to DigitalOcean

  • Hudl: Systems Active
  • OutSystems: Systems Active
  • Postman: Systems Active
  • Mendix: Systems Active
  • Bandwidth: Issues Detected
  • DataRobot: Systems Active
  • Grafana Cloud: Systems Active
  • SmartBear Software: Systems Active
  • Test IO: Systems Active
  • Copado Solutions: Systems Active
  • CircleCI: Systems Active
  • LaunchDarkly: Systems Active

Frequently Asked Questions - DigitalOcean

Is there a DigitalOcean outage?
The current status of DigitalOcean is: Minor Outage
Where can I find the official status page of DigitalOcean?
The official status page for DigitalOcean is here
How can I get notified if DigitalOcean is down or experiencing an outage?
To get notified of any status changes to DigitalOcean, simply sign up for OutLogger's free monitoring service. OutLogger checks the official status of DigitalOcean every few minutes and will notify you of any changes. You can view the status of all your cloud vendors in one dashboard. Sign up here
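If you prefer to poll the official status yourself, DigitalOcean's status page appears to follow the common Statuspage format, which conventionally exposes a JSON summary endpoint. This is a minimal sketch under that assumption; the `STATUS_URL` below is inferred from the Statuspage convention, not confirmed by this page, so verify it before relying on it.

```python
import json
import urllib.request

# Assumed endpoint, based on the standard Statuspage API convention
# (/api/v2/status.json). Confirm against the official status page first.
STATUS_URL = "https://status.digitalocean.com/api/v2/status.json"


def parse_status(payload: dict) -> tuple[str, str]:
    """Extract (indicator, description) from a Statuspage-style payload.

    Statuspage indicators are typically "none", "minor", "major", or "critical".
    """
    status = payload.get("status", {})
    return status.get("indicator", "unknown"), status.get("description", "unknown")


def fetch_status(url: str = STATUS_URL) -> tuple[str, str]:
    """Fetch and parse the current status (makes a network call)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_status(json.load(resp))
```

Running `fetch_status()` on a schedule (cron, a systemd timer) and alerting when the indicator moves away from `"none"` replicates the core of what a monitoring service does.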
What does DigitalOcean do?
DigitalOcean provides easy-to-use, scalable cloud hosting solutions designed for small and mid-sized businesses.