
Is there a CircleCI outage?

CircleCI status: Systems Active

Last checked: 5 minutes ago

Get notified about any outages, downtime, or incidents for CircleCI and 1,800+ other cloud vendors. Monitor 10 companies for free.

Subscribe for updates

CircleCI outages and incidents

Outage and incident data over the last 30 days for CircleCI.

There have been 6 outages or incidents for CircleCI in the last 30 days.

Tired of searching for status updates?

Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!

Sign Up Now

Components and Services Monitored for CircleCI

OutLogger tracks the status of these components for CircleCI:

Artifacts Active
Billing & Account Active
CircleCI Insights Active
CircleCI Releases Active
CircleCI UI Active
CircleCI Webhooks Active
Docker Jobs Active
Machine Jobs Active
macOS Jobs Active
Notifications & Status Updates Active
Pipelines & Workflows Active
Runner Active
Windows Jobs Active
AWS Active
Google Cloud Platform Google Cloud DNS Active
Google Cloud Platform Google Cloud Networking Active
Google Cloud Platform Google Cloud Storage Active
Google Cloud Platform Google Compute Engine Active
mailgun API Active
mailgun Outbound Delivery Active
mailgun SMTP Active
OpenAI Active
Atlassian Bitbucket API Active
Atlassian Bitbucket Source downloads Active
Atlassian Bitbucket SSH Active
Atlassian Bitbucket Webhooks Active
Docker Authentication Active
Docker Hub Active
Docker Registry Active
GitHub API Requests Active
GitHub Git Operations Active
GitHub Packages Active
GitHub Pull Requests Active
GitHub Webhooks Active
GitLab Active

Latest CircleCI outages and incidents.

View the latest incidents for CircleCI and check for official updates:

Updates:

  • Time: Oct. 22, 2024, 6:22 p.m.
    Status: Resolved
    Update: Job start times have recovered.
  • Time: Oct. 22, 2024, 6:10 p.m.
    Status: Monitoring
    Update: We have implemented a fix and are seeing jobs recover. We will continue to monitor the change and update the status here.
  • Time: Oct. 22, 2024, 6:04 p.m.
    Status: Identified
    Update: We have identified the issue and are working on deploying a fix.
  • Time: Oct. 22, 2024, 6 p.m.
    Status: Investigating
    Update: We are investigating reports that jobs are not starting.

Updates:

  • Time: Nov. 15, 2024, 1:26 p.m.
    Status: Postmortem
    Update:

    ## Summary

    On October 22, 2024, from 14:45 to 15:52 and again from 17:41 to 18:22 UTC, CircleCI customers experienced failures on new job submissions as well as failures on jobs that were in progress. A sudden increase in the number of tasks completing simultaneously and requests to upload artifacts from jobs overloaded the service responsible for managing job output. On October 28, 2024, from 13:27 to 14:13 and from 14:58 to 15:50, CircleCI customers experienced a recurrence of these effects due to a similar cause. During these incidents, customers would have seen their jobs fail to start with an infrastructure failure; jobs that were already in progress also failed with an infrastructure failure. We want to thank our customers for your patience and understanding as we worked to resolve these incidents. The original status pages for the incidents on October 22 can be found [here](https://status.circleci.com/incidents/6yjv79g764yc) and [here](https://status.circleci.com/incidents/0crxbhkflndc). The status pages for the incidents on October 28 can be found [here](https://status.circleci.com/incidents/xk37ycndxbhc) and [here](https://status.circleci.com/incidents/8ktdwlsf2lm8).

    ## What Happened (all times UTC)

    On October 22, 2024, at 14:45 there was a sudden increase in customer tasks completing at the same time within CircleCI. In order to record each of these task end events, including the amount of storage the task used, the system that manages task state (distributor) made calls to our internal API gateway, which subsequently queried the system responsible for storing job output (output service). At this point, output service became overwhelmed with requests; although some requests were handled successfully, the vast majority were delayed before finally receiving a `499 Client Closed Request` error response.

    ![Distributor task end calls to the internal API gateway](https://global.discourse-cdn.com/circleci/original/3X/2/b/2b68322aaf27124eb5ae63a15bc0f8f2118c3f7b.png)

    Additionally, at 14:50, output service received an influx of artifact upload requests, further straining resources in the service. An incident was officially declared at 14:57. Output service was scaled horizontally at 15:16 to handle the additional load it was receiving. Internal health checks began to recover at 15:25, and we continued to monitor output service until incoming requests returned to normal levels. The incident was resolved at 15:52 and we kept output service horizontally scaled.

    At 17:41, output service received another sharp increase in requests to upload artifacts and was unable to keep up with the additional load, causing jobs to fail again. An incident was declared at 17:57. Because output service was still horizontally scaled from the initial incident, it automatically recovered by 18:00. As a proactive measure, we further scaled output service horizontally at 18:02. We continued to monitor our systems until the incident was resolved at 18:22.

    Following incident resolution, we continued our investigation and uncovered on October 25 that our internal API gateway was configured with low values for the maximum number of connections allowed to each of the services that experienced increased load on October 22. We immediately increased these values so that the gateway could handle an increased volume of task end events moving forward. Despite these improvements, on October 28, 2024, at 13:27, customer jobs started to fail in the same way as they had on October 22. An incident was officially declared at 13:38. By 13:48 the system had automatically recovered without any intervention, and the incident was resolved at 14:13. We continued to investigate the root cause of the delays and failures, but at 14:45 customer jobs started to fail again in the same way. We declared another incident at 14:50. In order to reduce the load on output service, we removed the retry logic used when requesting the storage consumed per task from output service. This allowed tasks to complete even if storage used could not be retrieved (to the customer's benefit). Additionally, we scaled distributor horizontally at 15:19 in order to handle the increased load. At 15:21 our systems began to recover. We continued to monitor and resolved the incident at 15:51. We then returned to our investigation into the root cause of this recurring behavior and discovered an additional client in distributor that was configured with a low value for the maximum number of connections to our internal API gateway. We increased this value at 17:33.

    ## Future Prevention and Process Improvement

    Following the remediation on October 28, we conducted an audit of **all** of the HTTP clients in the execution environment and proactively raised the connection limits on those that were configured like the ones in the internal API gateway and distributor. Additionally, we identified a gap in observability with these HTTP clients that prevented us from identifying the root cause of these incidents sooner. We immediately added observability to all of these clients so that we can alert if connection pools become exhausted again in the future. (A generic illustration of this kind of connection-pool and retry tuning appears after this update list.)
  • Time: Oct. 22, 2024, 3:52 p.m.
    Status: Resolved
    Update: The issue affecting job processing has now been fully resolved. All systems are operational and jobs are processing as expected. We thank you for your patience during this incident and apologize for any inconvenience caused. If you continue to experience any issues, please contact our customer support team.
  • Time: Oct. 22, 2024, 3:39 p.m.
    Status: Monitoring
    Update: We are beginning to see recovery from the issue that prevented jobs from starting. We will continue to monitor the situation closely. Thank you for your patience.
  • Time: Oct. 22, 2024, 3:32 p.m.
    Status: Identified
    Update: We have identified the issue that is causing jobs to not start as expected and are currently working on implementing a fix to mitigate the problem. We will provide more updates as information becomes available and we appreciate your continued patience.
  • Time: Oct. 22, 2024, 3:15 p.m.
    Status: Investigating
    Update: We are continuing to investigate this issue.
  • Time: Oct. 22, 2024, 3:02 p.m.
    Status: Investigating
Update: We are investigating issues with jobs failing to start.
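
The postmortem above turns on two client-side knobs: how many connections an HTTP client may hold open to a downstream service, and whether it retries on failure. As a generic illustration only (not CircleCI's internal code, whose languages and libraries are not described here), the sketch below shows how those knobs look in Python's requests/urllib3 stack; the pool size, endpoint, and service name are hypothetical.

```python
# Hypothetical illustration of bounding an HTTP client's connection pool and
# disabling retries so the client fails fast instead of piling load onto an
# already-struggling downstream service. Names and values are made up.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def build_client(pool_size: int = 100) -> requests.Session:
    """Build a requests Session whose per-host connection pool can absorb bursts."""
    session = requests.Session()
    adapter = HTTPAdapter(
        pool_connections=pool_size,  # number of host pools to keep cached
        pool_maxsize=pool_size,      # max concurrent connections per host
        max_retries=Retry(total=0),  # no retries: surface failures immediately
    )
    session.mount("https://", adapter)
    session.mount("http://", adapter)
    return session

# Usage: every request made through this session shares the bounded pool, e.g.
#   client = build_client(pool_size=100)
#   client.get("https://internal-gateway.example.com/tasks/storage-used", timeout=5)
# (the URL above is a placeholder and will not resolve).
```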

Updates:

  • Time: Oct. 21, 2024, 6:41 p.m.
    Status: Resolved
Update: No other reports have been received and we were unable to reproduce the issue. Investigation concluded.
  • Time: Oct. 21, 2024, 6:19 p.m.
    Status: Investigating
Update: We're currently investigating a report of the project pipeline view showing "No Workflow" for pipelines whose workflows were in fact created. We will update as soon as we know more details.

Updates:

  • Time: Sept. 27, 2024, 9:39 p.m.
    Status: Resolved
    Update: This incident has been resolved.
  • Time: Sept. 27, 2024, 9:22 p.m.
    Status: Monitoring
    Update: Jobs using DockerHub images are no longer failing at elevated rates. Customers should rerun any jobs impacted by this incident.
  • Time: Sept. 27, 2024, 8:49 p.m.
    Status: Identified
    Update: Follow DockerHub's status updates for more information: https://www.dockerstatus.com/pages/incident/533c6539221ae15e3f000031/66f719d334b03d6913767794
  • Time: Sept. 27, 2024, 8:45 p.m.
    Status: Identified
    Update: This includes CircleCI's convenience images ("cimg").

Updates:

  • Time: Sept. 19, 2024, 8:03 p.m.
    Status: Resolved
    Update: The incident where a performance problem prevented some pipelines from being created properly has now been resolved. We appreciate your patience and understanding as we worked through this incident. Please reach out to our support team for any further questions or if you experience any further issues.
  • Time: Sept. 19, 2024, 7:48 p.m.
    Status: Monitoring
    Update: We have corrected a performance problem that prevented pipelines from being created properly, and will continue to monitor our platform for stability. Affected users should re-trigger affected pipelines through their VCS provider or by using the "Trigger Pipeline" button in the CircleCI UI.
  • Time: Sept. 19, 2024, 7:27 p.m.
    Status: Identified
    Update: We have identified the cause of the problem. We are working to remediate the issue.
  • Time: Sept. 19, 2024, 7:05 p.m.
    Status: Investigating
    Update: We are investigating an issue where pipelines are not created for some users.

Check the status of similar companies and alternatives to CircleCI

Hudl: Systems Active
OutSystems: Systems Active
Postman: Systems Active
Mendix: Systems Active
DigitalOcean: Issues Detected
Bandwidth: Systems Active
DataRobot: Systems Active
Grafana Cloud: Systems Active
SmartBear Software: Systems Active
Test IO: Systems Active
Copado Solutions: Systems Active
LaunchDarkly: Systems Active

Frequently Asked Questions - CircleCI

Is there a CircleCI outage?
The current status of CircleCI is: Systems Active
Where can I find the official status page of CircleCI?
The official status page for CircleCI is https://status.circleci.com/
How can I get notified if CircleCI is down or experiencing an outage?
To get notified of any status changes to CircleCI, simply sign up to OutLogger's free monitoring service. OutLogger checks the official status of CircleCI every few minutes and will notify you of any changes. You can view the status of all your cloud vendors in one dashboard. Sign up here. If you would rather check programmatically, see the polling sketch after this FAQ.
What does CircleCI do?
CircleCI provides CI/CD for any platform, on its cloud or on your own infrastructure, at no cost.
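
For readers who would rather poll the status page themselves than rely on a monitoring service, here is a minimal sketch. It assumes status.circleci.com is an Atlassian Statuspage instance exposing the standard public /api/v2/status.json endpoint; the endpoint, response fields, and polling interval are assumptions, not details confirmed on this page.

```python
# Minimal status poller. Assumes status.circleci.com is an Atlassian Statuspage
# instance and therefore serves the standard /api/v2/status.json endpoint;
# adjust the URL and fields if that assumption does not hold.
import time
import requests

STATUS_URL = "https://status.circleci.com/api/v2/status.json"

def fetch_status() -> tuple[str, str]:
    """Return the (indicator, description) pair reported by the status page."""
    resp = requests.get(STATUS_URL, timeout=10)
    resp.raise_for_status()
    status = resp.json()["status"]
    return status["indicator"], status["description"]

if __name__ == "__main__":
    last = None
    while True:
        indicator, description = fetch_status()
        if indicator != last:          # only report changes, not every poll
            print(f"CircleCI status: {description} (indicator: {indicator})")
            last = indicator
        time.sleep(300)                # poll every 5 minutes
```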