Outage and incident data over the last 30 days for Atlassian Bitbucket.
Outlogger tracks the status of these components for Atlassian Bitbucket:
Component | Status |
---|---|
API | Active |
Authentication and user management | Active |
Email delivery | Active |
Git LFS | Active |
Git via HTTPS | Active |
Git via SSH | Active |
Pipelines | Active |
Purchasing & Licensing | Active |
Signup | Active |
Source downloads | Active |
Webhooks | Active |
Website | Active |
View the latest incidents for Atlassian Bitbucket and check for official updates:
Description: ### Summary

On March 11, 2024, between 20:29 UTC and 21:41 UTC, Atlassian customers using Bitbucket Cloud experienced degradation of its website and APIs. This impact was caused by an issue with Bitbucket’s database, resulting in saturated connection pools, increased response times, and a growing number of requests timing out completely.

### **IMPACT**

Impacted customers experienced increased latency when accessing the [bitbucket.org](http://bitbucket.org/) website and APIs for the duration of the incident. Git requests over HTTPS and SSH were also affected.

### **ROOT CAUSE**

The incident was caused by a bug in the version of the database software in use. With Bitbucket’s query patterns, if certain maintenance processes do not run frequently enough, issues can eventually arise that result in poor query planner performance. Due to this bug, our process configuration, which had previously been tuned to our specific workload, is no longer effective. While the appropriate tuning is determined, we have implemented a system that triggers that process as soon as any issues are detected. We are confident this will prevent a repeat incident while we determine an appropriate threshold and cadence.

### **REMEDIAL ACTIONS PLAN & NEXT STEPS**

We know that outages impact your productivity. We are prioritizing the following improvement actions to reduce recovery time, limit impact, and avoid repeating these types of incidents in the future:

* Vacuuming immediately when the defect is detected.
* Appropriately tuning autovacuum settings to meet the requirements of our workload.
* Upgrading our database version as soon as the fix becomes available.

We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform’s performance and availability.

Thanks, Atlassian Customer Support
Status: Postmortem
Impact: Major | Started At: March 11, 2024, 9:12 p.m.
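The first remedial action above, vacuuming immediately when the defect is detected, implies an automated watcher over table bloat statistics. Atlassian has not published how its trigger actually works; the vacuum and autovacuum terminology suggests PostgreSQL, so the following is only a minimal sketch of that general approach. The connection string, dead-tuple threshold, and polling interval are hypothetical.

```python
# Hypothetical sketch: poll pg_stat_user_tables for dead-tuple bloat and
# trigger a manual VACUUM ANALYZE when a threshold is exceeded.
# The DSN, threshold, and poll interval are illustrative, not Bitbucket's.
import time

import psycopg2

DSN = "dbname=bitbucket host=db.example.internal"  # hypothetical connection string
DEAD_TUPLE_RATIO = 0.10   # vacuum when >10% of a table's rows are dead (illustrative)
POLL_SECONDS = 60

CHECK_SQL = """
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE n_live_tup + n_dead_tup > 0
"""


def check_and_vacuum() -> None:
    conn = psycopg2.connect(DSN)
    try:
        # VACUUM cannot run inside a transaction block, so use autocommit.
        conn.autocommit = True
        with conn.cursor() as cur:
            cur.execute(CHECK_SQL)
            for relname, live, dead in cur.fetchall():
                if dead / (live + dead) > DEAD_TUPLE_RATIO:
                    print(f"vacuuming {relname}: {dead} dead tuples")
                    cur.execute(f'VACUUM (ANALYZE) "{relname}"')
    finally:
        conn.close()


if __name__ == "__main__":
    while True:
        check_and_vacuum()
        time.sleep(POLL_SECONDS)
```

A watcher like this is a stopgap: it reacts to dead-tuple build-up after the fact, which is why the postmortem also lists autovacuum tuning and a database upgrade as the longer-term fixes.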
Description: Between 28 Feb 2024 23:15 UTC and 29 Feb 2024 02:41 UTC, we experienced an issue with new product purchasing for all products. All new sign-up products have since been successfully provisioned, and we have confirmed the issue is resolved and the service is operating normally.
Status: Resolved
Impact: Minor | Started At: Feb. 29, 2024, 1:27 a.m.
Description: ### Summary

On February 22, 2024, between 7:22 UTC and 13:30 UTC, Atlassian customers using Bitbucket Cloud experienced degradation of its website and APIs. This was caused by the vacuum process not running frequently enough on our high-traffic database tables, which impaired the database’s ability to handle requests. This resulted in saturated connection pools, increased response times, and a growing number of requests timing out completely. After the database recovered at 13:30 UTC, Bitbucket Pipelines experienced build scheduling delays as it processed the backlog of jobs. Additional resources were added to Bitbucket Pipelines and the backlog was cleared in full by 17:30 UTC.

### **IMPACT**

Impacted customers experienced significant delays in running Bitbucket Pipelines and increased latency when accessing the [bitbucket.org](http://bitbucket.org/) website and APIs for the duration of the incident. Git requests over HTTPS and SSH were unaffected.

### **ROOT CAUSE**

The incident was caused by an issue during the routine autovacuuming of our active database tables, which impaired the database’s ability to serve requests. This led to slowdowns that affected a variety of Bitbucket services, including the queuing of a large backlog of unscheduled pipelines.

### **REMEDIAL ACTIONS PLAN & NEXT STEPS**

We know that outages impact your productivity. We are prioritizing the following improvement actions to reduce recovery time, limit impact, and avoid repeating these types of incidents in the future:

* Reconfigure the vacuuming threshold for database tables with high write activity.
* Adjust alert thresholds to proactively catch this behavior earlier and reduce potential impact.
* Tune autoscaling and load-shedding behavior for Pipelines services and increase build runner capacity.

We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform’s performance and availability.

Thanks, Atlassian Customer Support
Status: Postmortem
Impact: Major | Started At: Feb. 22, 2024, 2:45 p.m.
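The remediation item about reconfiguring vacuuming thresholds for high-write tables corresponds, in PostgreSQL, to per-table autovacuum storage parameters that override the global defaults. Bitbucket's actual schema and chosen values are not public; the table names and numbers below are hypothetical and only illustrate how such per-table tuning is typically applied.

```python
# Hypothetical sketch: lower per-table autovacuum thresholds for high-write
# tables so dead tuples are cleaned up well before query performance degrades.
# Table names and values are illustrative, not Bitbucket's actual schema.
import psycopg2

DSN = "dbname=bitbucket host=db.example.internal"  # hypothetical connection string

# autovacuum_vacuum_scale_factor: fraction of a table that must be dead before
# autovacuum runs (PostgreSQL default is 0.2, i.e. 20% of the table).
# autovacuum_vacuum_threshold: additional fixed number of dead rows required.
HIGH_WRITE_TABLES = {
    "pipeline_jobs": {
        "autovacuum_vacuum_scale_factor": 0.01,
        "autovacuum_vacuum_threshold": 1000,
    },
    "repository_events": {
        "autovacuum_vacuum_scale_factor": 0.02,
        "autovacuum_vacuum_threshold": 5000,
    },
}


def apply_settings() -> None:
    # The connection context manager commits the transaction on success.
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        for table, params in HIGH_WRITE_TABLES.items():
            opts = ", ".join(f"{k} = {v}" for k, v in params.items())
            # Per-table storage parameters override the global autovacuum defaults.
            cur.execute(f'ALTER TABLE "{table}" SET ({opts})')


if __name__ == "__main__":
    apply_settings()
```

Scoping the change to specific high-write tables keeps autovacuum behavior unchanged for the rest of the database, rather than making the global settings more aggressive across the board.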