Last checked: 4 minutes ago
Get notified about any outages, downtime, or incidents for Confluence and 1800+ other cloud vendors. Monitor 10 companies for free.
Outage and incident data over the last 30 days for Confluence.
Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!
OutLogger tracks the status of these components for Confluence:
Component | Status |
---|---|
Administration | Active |
Authentication and User Management | Active |
Cloud to Cloud Migrations - Copy Product Data | Active |
Comments | Active |
Confluence Automations | Active |
Create and Edit | Active |
Marketplace Apps | Active |
Notifications | Active |
Purchasing & Licensing | Active |
Search | Active |
Server to Cloud Migrations - Copy Product Data | Active |
Signup | Active |
View Content | Active |
Mobile | Active |
Android App | Active |
iOS App | Active |
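If you prefer to verify component health programmatically rather than relying on a third-party tracker, the official status page is a Statuspage instance and exposes the standard public JSON endpoints. The sketch below is a minimal example, assuming the Confluence status page is hosted at confluence.status.atlassian.com (the host is an assumption, not confirmed by this page); the same API also serves `/api/v2/incidents/unresolved.json` for open incidents.

```python
# Minimal sketch: read component statuses from a Statuspage-powered status page.
# The host below is an assumption; substitute the official Confluence status page URL.
import requests

STATUS_HOST = "https://confluence.status.atlassian.com"  # assumed host

def fetch_component_statuses(host: str = STATUS_HOST) -> dict:
    """Return {component name: status} from the standard Statuspage components endpoint."""
    resp = requests.get(f"{host}/api/v2/components.json", timeout=10)
    resp.raise_for_status()
    return {c["name"]: c["status"] for c in resp.json()["components"]}

if __name__ == "__main__":
    for name, status in fetch_component_statuses().items():
        flag = "" if status == "operational" else "  <-- degraded"
        print(f"{name}: {status}{flag}")
```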
View the latest incidents for Confluence and check for official updates:
Description: At around 4am UTC, about 40 percent of Forge app invocations experienced high latency, and a portion of requests failed, during a 15-minute window. The scaling of the instances was misconfigured following a new deployment of the service and required manual intervention, which took a few minutes to resolve. Timeline:
- 2024-01-31 04:00 UTC: impact started
- 2024-01-31 04:03 UTC: incident detected
- 2024-01-31 04:15 UTC: incident resolved and impact ended
This issue is now resolved and Forge is fully operational. We apologize for any inconvenience this may have caused to our customers, partners, and our developer community.
Status: Resolved
Impact: None | Started At: Jan. 31, 2024, 6:03 a.m.
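For short windows of elevated latency and partial failures like the one described above, a client-side retry with exponential backoff is a common mitigation. The sketch below is illustrative only, not an official Atlassian recipe; the example URL is hypothetical.

```python
# Illustrative sketch: retry an HTTP call on timeouts and 5xx responses with
# exponential backoff, to ride out a brief window of elevated failure rates.
import time
import requests

def get_with_backoff(url: str, retries: int = 5, base_delay: float = 1.0, **kwargs):
    """GET `url`, retrying on network errors and 5xx responses with exponential backoff."""
    for attempt in range(retries):
        try:
            resp = requests.get(url, timeout=10, **kwargs)
            if resp.status_code < 500:
                return resp  # success, or a client error that retrying will not fix
        except requests.RequestException:
            pass  # timeout or connection error: fall through to the next attempt
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"giving up on {url} after {retries} attempts")

# Hypothetical usage against a Confluence Cloud REST endpoint:
# resp = get_with_backoff("https://your-site.atlassian.net/wiki/rest/api/space")
```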
Description:
### Summary
On January 18, 2024, between 01:12 am UTC and 02:12 am UTC, Atlassian customers using Confluence Cloud were unable to access core product functionality and saw degraded performance in the APAC region. The event was triggered by a deployment of a downstream dependency service that could not scale with the increase in traffic. The incident was detected within 18 minutes by an automated monitoring system and mitigated by manually scaling out nodes, which put Atlassian systems into a known good state. The total time to resolution was about one hour. In response to this incident, we helped scale the service and put in a deployment block to prevent the service from being deployed to production again until the issue was resolved.
On January 25, 2024, between 01:05 am UTC and 01:42 am UTC, a separate automated deployment process ran to deploy services that had not been deployed in the previous seven days. This deployment also caused the dependent service to run at lower-than-desired capacity, resulting in degraded performance in Confluence. The issue was detected within 10 minutes by an automated monitoring system and mitigated by manually scaling out nodes. The total time to resolution was about 37 minutes.
### Impact
The overall impact was on January 18, 2024, between 01:12 am UTC and 02:12 am UTC, and on January 25, 2024, between 01:05 am UTC and 01:42 am UTC. These incidents caused service disruption to customers in the APAC region, who may have noticed timeouts and failed requests when viewing pages, creating pages, and using other Confluence Cloud functionality.
### Root cause
The issue was caused by a deployment of a downstream service that had not scaled to meet growing traffic. As a result, Confluence Cloud saw timeouts and errors in its requests, and users received HTTP 500 errors.
### Remedial actions plan & next steps
We know that outages impact your productivity. We are prioritizing the following improvement actions to avoid repeating this type of incident:
* Ensure the right capacity for the target service pool during deployment.
We apologize to customers whose services were impacted during this incident; we are taking immediate steps to improve the platform's performance and availability.
Thanks, Atlassian Customer Support
Status: Postmortem
Impact: Minor | Started At: Jan. 26, 2024, 1:34 a.m.
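The remedial action above (ensuring the target service pool has the right capacity during deployment) can be expressed as a simple pre-rollout gate. The sketch below is hypothetical: the pool model, the numbers, and the headroom factor are all illustrative assumptions, not details from the postmortem.

```python
# Hypothetical sketch of a deployment capacity gate: block the rollout if the
# freshly deployed pool cannot absorb expected peak traffic with some headroom.
from dataclasses import dataclass

@dataclass
class PoolCapacity:
    nodes: int                # nodes currently serving in the pool
    requests_per_node: float  # sustained requests/sec a single node can handle

def can_serve(pool: PoolCapacity, peak_rps: float, headroom: float = 1.5) -> bool:
    """True if the pool can absorb peak traffic with the given safety headroom."""
    return pool.nodes * pool.requests_per_node >= peak_rps * headroom

# Gate the rollout instead of scaling out manually after users see HTTP 500s.
pool = PoolCapacity(nodes=4, requests_per_node=250.0)  # illustrative values
if not can_serve(pool, peak_rps=1200.0):
    raise SystemExit("deployment blocked: pool is under-provisioned for peak traffic")
```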
Description: We experienced an issue in Confluence Cloud where copying legacy pages would time out and display a blank page. The issue has been resolved and the service is operating normally.
Status: Resolved
Impact: Minor | Started At: Jan. 25, 2024, 7:24 p.m.
Join OutLogger to be notified when any of your vendors or the components you use experience an outage or downtime. Join for free - no credit card required.