
Is there a Jira Service Management outage?

Jira Service Management status: Systems Active

Last checked: 5 minutes ago

Get notified about any outages, downtime, or incidents for Jira Service Management and 1,800+ other cloud vendors. Monitor 10 companies for free.

Subscribe for updates

Jira Service Management outages and incidents

Outage and incident data over the last 30 days for Jira Service Management.

There have been 7 outages or incidents for Jira Service Management in the last 30 days.


Tired of searching for status updates?

Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!

Sign Up Now

Components and Services Monitored for Jira Service Management

OutLogger tracks the status of these components for Jira Service Management:

Assist Active
Authentication and User Management Active
Automation for Jira Active
Jira Service Management Email Requests Active
Jira Service Management Web Active
Opsgenie Alert Flow Active
Opsgenie Incident Flow Active
Purchasing & Licensing Active
Service Portal Active
Signup Active

Latest Jira Service Management outages and incidents

View the latest incidents for Jira Service Management and check for official updates:

Updates:

  • Time: Aug. 16, 2024, 12:54 a.m.
    Status: Postmortem
    Update: ### Summary
    From `12/Aug/24 10:39 UTC` to `13/Aug/24 14:38 UTC`, some customers experienced degraded performance on Issue View in Jira. This was caused by a data processing compatibility problem between a cache, the underlying database, and the new deployment. Because the failure rate rose slowly and the initial surface area of impact was small, the problem did not immediately trigger our continuous error monitoring and alerting. By the time we identified the issue, it was resolving itself through self-healing mechanisms in the infrastructure. However, in a few outlier cases we had to intervene with tenant-specific cache recalculations. All but 6 tenants were fully remediated by `12/Aug/24 21:30 UTC`. The issue occurred on the read layer of our architecture, so while customer experience was degraded, there was no data loss.

    ### **IMPACT**
    About 1% of instances were impacted over the lifetime of the incident. Users on those impacted instances would have experienced degradation when loading Issue View in a specific scenario: when a Multi Select Custom Field was enabled on an Issue and that Custom Field also had a Default Value set.

    ### **ROOT CAUSE**
    We introduced a change in our code which caused processing of Custom Fields in specific configurations to fail. This prevented Issue View from loading issues for projects with the configuration described above. The problem occurred because of different representations of the data in the database, in the code base, and in the cache in the production environment. These multiple representations caused an exception when translating the data from one representation to the next.

    ### **REMEDIAL ACTIONS PLAN & NEXT STEPS**
    The problem largely self-healed as the cache expired and was refreshed with compatible data; however, we chose to force cache re-computation for affected tenants in order to expedite this process. We chose not to roll back the deployment at that point, as that would have created the reverse compatibility issue with the already healed tenants. Instead, we focused on forward fixing with a hotfix and accelerating remediation for still-affected tenants. For a very small number of tenants, forced re-computation did not immediately rectify the issue, and we had to roll forward a code hotfix to remediate. We are prioritizing the following improvement actions to avoid repeating this type of incident:
    * The already deployed hotfix to stop this particular problem recurring.
    * A series of tests for this class of issue in our read layer.
    * A review of monitoring to detect these fine-grained problems before they cause more customer impact.
    We apologize for any disruption this issue may have caused and are taking steps to help ensure it does not occur again.
    Thanks, Atlassian Customer Support
    (A minimal sketch of this cache-compatibility failure mode follows the update list below.)
  • Time: Aug. 13, 2024, 3:34 p.m.
    Status: Resolved
    Update: The fix for the incident has been deployed and the issue has been resolved.
  • Time: Aug. 13, 2024, 7:09 a.m.
    Status: Identified
    Update: The root cause was identified and the hotfix is being deployed.
  • Time: Aug. 13, 2024, 12:49 a.m.
    Status: Investigating
    Update: The issue has been largely remediated. The team is working on a fix for a few remaining tenants.
  • Time: Aug. 12, 2024, 7:04 p.m.
    Status: Investigating
    Update: The team is investigating a potential root cause for the issue. A fix is yet to be rolled out.
  • Time: Aug. 12, 2024, 6:15 p.m.
    Status: Investigating
    Update: Team is actively working on a fix to mitigate the issue. Root cause is still to be determined.
  • Time: Aug. 12, 2024, 5:46 p.m.
    Status: Investigating
    Update: Team is still working on a potential fix to mitigate the issue while investigation into root cause continues.
  • Time: Aug. 12, 2024, 5:09 p.m.
    Status: Investigating
    Update: The issue is still ongoing at the moment, across multiple products. The team is actively working on a fix.
  • Time: Aug. 12, 2024, 5:07 p.m.
    Status: Investigating
    Update: We are continuing to investigate this issue.
  • Time: Aug. 12, 2024, 3:51 p.m.
    Status: Investigating
    Update: We've identified an issue in Jira where, upon loading issues in multiple projects, a blank screen is shown. The team is currently working on identifying the root cause and resolving it.
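
The postmortem above (Aug. 16) describes a cache whose stored representation no longer matched what the newly deployed code expected, so translating a stale entry raised an exception at read time, and forcing a per-tenant re-computation repopulated the cache in the new format. Below is a minimal sketch of that failure mode, using entirely hypothetical names (`MultiSelectDefault`, `FieldCache`); it illustrates the general pattern, not Atlassian's implementation.

```python
from dataclasses import dataclass

# Hypothetical "new" representation: the code now expects a multi-select
# default value to be a list of option ids.
@dataclass
class MultiSelectDefault:
    option_ids: list[str]

def parse_default(raw) -> MultiSelectDefault:
    """Translate the stored/cached representation into the in-memory one.

    A stale cache entry written by the previous deployment may still hold a
    single scalar id (the old shape), so this translation step raises --
    analogous to the Issue View failures described in the postmortem.
    """
    if not isinstance(raw, list):
        raise TypeError(f"expected a list of option ids, got {type(raw).__name__}")
    return MultiSelectDefault(option_ids=[str(x) for x in raw])

class FieldCache:
    """Toy per-tenant cache with a forced re-computation hook."""

    def __init__(self, load_from_db):
        self._load_from_db = load_from_db   # the database remains the source of truth
        self._entries: dict[str, object] = {}

    def get_default(self, tenant_id: str) -> MultiSelectDefault:
        raw = self._entries.get(tenant_id)
        if raw is None:
            raw = self._load_from_db(tenant_id)
            self._entries[tenant_id] = raw
        return parse_default(raw)           # raises if the cached shape is stale

    def recompute(self, tenant_id: str) -> None:
        """Drop the stale entry so the next read repopulates it in the new format."""
        self._entries.pop(tenant_id, None)

cache = FieldCache(load_from_db=lambda tenant: ["10001", "10002"])
cache._entries["tenant-a"] = "10001"   # old scalar representation left behind by the previous deploy
# cache.get_default("tenant-a")        # would raise TypeError, i.e. the degraded Issue View
cache.recompute("tenant-a")            # the forced tenant-specific cache recalculation
print(cache.get_default("tenant-a"))   # now loads cleanly from the database in the new shape
```

In this toy, natural cache expiry plays the role of the "self-healing" the postmortem mentions, while `recompute()` mirrors the tenant-specific recalculation used for the outlier tenants.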

Updates:

  • Time: Aug. 26, 2024, 9:40 a.m.
    Status: Postmortem
    Update: ### Summary
    From August 9, 2024, 14:49 UTC until August 10, 2024, 00:55 UTC, Atlassian customers using Jira and Jira Service Management products could not use JSM Assets objects in their workflows. The event was triggered by an out-of-cycle deployment of our services. There were no functional changes included in the service; however, the deployment impacted multiple customers across Europe, North America, and Asia Pacific. The incident was detected within 82 minutes by staff (customer reports) and mitigated by restarting the JSM Assets service, which put Atlassian systems into a known good state. The total time to resolution was about 4 hours for most customers, with one having a prolonged 10-hour outage.

    ### **IMPACT**
    The overall impact was between August 9, 2024, 14:49 UTC and August 10, 2024, 00:55 UTC on Jira and Jira Service Management products. The incident caused service disruption to Europe, North America, and Asia Pacific customers only, where they failed to leverage JSM Assets objects in their workflow.
    Jira users faced disruption when looking to:
    * View Assets objects associated with issues after loading their issues, lists of issues, or boards in the browser
    * View Gadget results which relied on AQL in their JQL
    * Interact with JQL+AQL via API
    * Transition issues which required Assets object validation
    Jira Service Management users faced disruption when looking to:
    * Create issues in the JSM Customer Portal
    * View Assets objects on Requests in the JSM Customer Portal
    * Fill in JSM Forms relying on Assets
    * Configure Asset fields and JSM Forms with Assets
    * Refresh queues based on AQL

    ### **ROOT CAUSE**
    The issue was caused by a race condition in refreshing authorization tokens. As a result, the products above could not retrieve access tokens and resource identifiers to support customer features, and users received HTTP 500 errors. More specifically, our out-of-cycle deployment triggered an authorization token refresh for a downstream service serving customer traffic at the time. As our service was processing traffic, it sought to update authorization tokens, and in some cases the tokens partially persisted within the customer context. Subsequent calls for the affected customer failed due to a mismatch of authorization tokens.

    ### **REMEDIAL ACTIONS PLAN & NEXT STEPS**
    We know that outages impact your productivity. While we have several testing and preventative processes in place, this specific issue was not identified because the change was related to a particular kind of legacy case that was not picked up by our automated continuous deployment suites and manual test scripts. We are prioritizing the following improvement actions to avoid repeating this type of incident:
    * Removing the need to cache authorization tokens during service runtime.
    Furthermore, we deploy our changes progressively (by cloud region) to avoid broad impact. In this case, our detection instrumentation could have worked better. To minimize the impact of breaking changes to our environments, we will implement additional preventative measures such as:
    * Alerting on high error rates over short spans of time.
    We apologize to customers whose services were impacted by this incident and are taking immediate steps to improve the platform's performance and availability.
    Thanks, Atlassian Customer Support
    (A minimal sketch of the token-refresh race described above follows this update list.)
  • Time: Aug. 10, 2024, 9:05 a.m.
    Status: Resolved
    Update: We are resolving this incident as we have mitigated the root cause for all customers.
  • Time: Aug. 9, 2024, 10:32 p.m.
    Status: Monitoring
    Update: The issue has been mitigated for all the impacted customers. We are continuing to investigate the root cause.
  • Time: Aug. 9, 2024, 10:19 p.m.
    Status: Identified
    Update: The issue has been mitigated for all the impacted customers. We are continuing to investigate the root cause
  • Time: Aug. 9, 2024, 7:17 p.m.
    Status: Identified
    Update: The issue has been mitigated for all the impacted customers. We are continuing to investigate the root cause
  • Time: Aug. 9, 2024, 4:55 p.m.
    Status: Identified
    Update: We have identified the root cause and have started deploying the mitigation steps
  • Time: Aug. 9, 2024, 4:35 p.m.
    Status: Identified
    Update: We are investigating an issue with Jira Service Management, where some customers are unable to access their Assets. The impact started around 15:00 UTC. The team is investigating the root cause and we are in the process of mitigation
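
The root cause in the postmortem above is a race condition while refreshing authorization tokens, in which the token state "partially persisted" and subsequent calls failed with HTTP 500 because the pieces no longer matched. The sketch below uses entirely hypothetical names and assumes a single-process token store; it shows the general fix pattern of building the replacement token set completely and swapping it in atomically, so a concurrent request sees either the old pair or the new pair, never a mix.

```python
import threading
from dataclasses import dataclass

@dataclass(frozen=True)
class TokenSet:
    """An access token plus the resource identifier it is valid for.

    Updating these two fields as separate writes is what allows a "partial"
    refresh: a request can observe a new token paired with an old resource
    id (or vice versa) and then fail downstream, e.g. with an HTTP 500.
    """
    access_token: str
    resource_id: str

class TokenStore:
    def __init__(self, initial: TokenSet):
        self._lock = threading.Lock()
        self._current = initial

    def get(self) -> TokenSet:
        # Readers always receive a complete, immutable snapshot.
        return self._current

    def refresh(self, new_token: str, new_resource_id: str) -> None:
        # Build the replacement fully, then swap it in under the lock, so an
        # in-flight request never sees a half-updated pair.
        replacement = TokenSet(access_token=new_token, resource_id=new_resource_id)
        with self._lock:
            self._current = replacement

store = TokenStore(TokenSet(access_token="t-old", resource_id="r-old"))
store.refresh("t-new", "r-new")
print(store.get())   # TokenSet(access_token='t-new', resource_id='r-new'), never a mixed pair
```

The buggy variant would mutate the token and the resource identifier as two separate writes while requests are being served; any call handled between those writes sees a mismatched pair, which is the signature the postmortem describes. The remediation item "removing the need to cache authorization tokens during service runtime" sidesteps the race entirely by not holding refreshable token state in the service at all.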

Updates:

  • Time: Aug. 21, 2024, 1:25 a.m.
    Status: Postmortem
    Update: ### Summary
    From August 9, 2024, 14:49 UTC until August 10, 2024, 00:55 UTC, Atlassian customers using Jira and Jira Service Management products could not use JSM Assets objects in their workflows. The event was triggered by an out-of-cycle deployment of our services. There were no functional changes included in the service; however, the deployment impacted multiple customers across Europe, North America, and Asia Pacific. The incident was detected within 82 minutes by staff (customer reports) and mitigated by restarting the JSM Assets service, which put Atlassian systems into a known good state. The total time to resolution was about 4 hours for most customers, with one having a prolonged 10-hour outage.

    ### **IMPACT**
    The overall impact was between August 9, 2024, 14:49 UTC and August 10, 2024, 00:55 UTC on Jira and Jira Service Management products. The incident caused service disruption to Europe, North America, and Asia Pacific customers only, where they failed to leverage JSM Assets objects in their workflow.
    Jira users faced disruption when looking to:
    * View Assets objects associated with issues after loading their issues, lists of issues, or boards in the browser
    * View Gadget results which relied on AQL in their JQL
    * Interact with JQL+AQL via API
    * Transition issues which required Assets object validation
    Jira Service Management users faced disruption when looking to:
    * Create issues in the JSM Customer Portal
    * View Assets objects on Requests in the JSM Customer Portal
    * Fill in JSM Forms relying on Assets
    * Configure Asset fields and JSM Forms with Assets
    * Refresh queues based on AQL

    ### **ROOT CAUSE**
    The issue was caused by a race condition in refreshing authorization tokens. As a result, the products above could not retrieve access tokens and resource identifiers to support customer features, and users received HTTP 500 errors. More specifically, our out-of-cycle deployment triggered an authorization token refresh for a downstream service serving customer traffic at the time. As our service was processing traffic, it sought to update authorization tokens, and in some cases the tokens partially persisted within the customer context. Subsequent calls for the affected customer failed due to a mismatch of authorization tokens.

    ### **REMEDIAL ACTIONS PLAN & NEXT STEPS**
    We know that outages impact your productivity. While we have several testing and preventative processes in place, this specific issue was not identified because the change was related to a particular kind of legacy case that was not picked up by our automated continuous deployment suites and manual test scripts. We are prioritizing the following improvement actions to avoid repeating this type of incident:
    * Removing the need to cache authorization tokens during service runtime.
    Furthermore, we deploy our changes progressively (by cloud region) to avoid broad impact. In this case, our detection instrumentation could have worked better. To minimize the impact of breaking changes to our environments, we will implement additional preventative measures such as:
    * Alerting on high error rates over short spans of time.
    We apologize to customers whose services were impacted by this incident and are taking immediate steps to improve the platform's performance and availability.
    Thanks, Atlassian Customer Support
  • Time: Aug. 9, 2024, 7:16 a.m.
    Status: Resolved
    Update: Between 08:10 UTC and 12:05 UTC, certain customers were unable to load Assets for Jira Service Management. The issue has been resolved and the service is operating normally.
  • Time: Aug. 8, 2024, 4 p.m.
    Status: Monitoring
    Update: We faced an incident with Assets functionality on JSM where customers were unable to load Assets. The impact started at 6:10 pm AEST and was resolved at 10:05 pm AEST. All services are back up and functioning properly. The team is continuing to monitor for any further issues.
  • Time: Aug. 8, 2024, 10:17 a.m.
    Status: Investigating
    Update: We are currently investigating an issue with Assets where customers are unable to access their Assets. The impact started around 2 pm IST. At the moment, the issue seems limited to the Europe region. The team is currently investigating the root cause and working to mitigate the impact for affected customers. We will share more updates as soon as we can.

Updates:

  • Time: July 30, 2024, 8:05 p.m.
    Status: Resolved
    Update: We have resolved the underlying issue and prevented it from affecting any more customers in the future. We have confirmed that only a small subset of customers were affected, and we’re dedicated to reaching out to each one of the affected customers.
  • Time: July 30, 2024, 6:23 p.m.
    Status: Identified
    Update: Our team has identified the root cause and confirmed the impact is limited to a small subset of customers. The team is currently working to determine the best course of action. We will provide more details within the next hour or so.
  • Time: July 30, 2024, 3:41 p.m.
    Status: Investigating
    Update: Our team continues investigating the root cause of this issue, and we are identifying the affected customers. We will provide more details within the next hour or so.
  • Time: July 30, 2024, 2:27 p.m.
    Status: Identified
    Update: We are investigating an issue with SLA data loss that is impacting some JSM Cloud customers. We will provide more details within the next hour.

Updates:

  • Time: July 29, 2024, 11:29 a.m.
    Status: Resolved
    Update: Between 28 July 2024, 23:00 UTC, and 29 July 2024, 10:22 UTC, some customers experienced performance degradation issues for Jira and JSM. The root cause was a problem with the propagation of configuration in our system during the migration of one of the database instances hosting your Jira cloud site. This caused the Jira application not to correctly balance load against database nodes within the database cluster, leading to CPU saturation of the database. We have deployed a fix to mitigate the issue and have verified that the services have recovered. The conditions that caused the bug have been addressed, and we're actively working on a permanent fix. The issue has been resolved and the service is operating normally. (A minimal sketch of this load-imbalance failure mode follows the update list below.)
  • Time: July 29, 2024, 10:55 a.m.
    Status: Monitoring
    Update: We have identified the root cause of the performance degradation and have mitigated the problem. We are now monitoring this closely.
  • Time: July 29, 2024, 10:04 a.m.
    Status: Identified
    Update: We continue to work on resolving the performance degradation for Jira and JSM. We have identified the root cause and recovery is complete for most sites. We will provide more details within the next hour.
  • Time: July 29, 2024, 9:01 a.m.
    Status: Identified
    Update: We continue to work on resolving the performance degradation for Jira and JSM. We have identified the root cause and expect recovery for most of the sites. We will provide more details within the next hour.
  • Time: July 29, 2024, 8:27 a.m.
    Status: Investigating
    Update: We are investigating cases of degraded performance for Jira and JSM Cloud customers. We will provide more details within the next hour.
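
The resolved update above attributes the degradation to configuration that failed to propagate during a database migration, leaving Jira unable to balance reads across the database cluster and saturating one node's CPU. The sketch below is a hypothetical miniature of that pattern (node names and the round-robin policy are assumptions, not Atlassian's implementation): with a stale node list every read lands on the same node, and applying the propagated configuration spreads the load again.

```python
import itertools

class ReadReplicaRouter:
    """Round-robin reads across the database nodes named in configuration."""

    def __init__(self, nodes: list[str]):
        self._nodes = list(nodes)
        self._cycle = itertools.cycle(self._nodes)

    def pick_node(self) -> str:
        return next(self._cycle)

    def apply_config(self, nodes: list[str]) -> None:
        """Re-point the router when updated cluster topology propagates."""
        self._nodes = list(nodes)
        self._cycle = itertools.cycle(self._nodes)

# Stale configuration: only the pre-migration node is known, so every read hits it
# and its CPU saturates under the full query load.
router = ReadReplicaRouter(["db-node-1"])
print([router.pick_node() for _ in range(4)])   # ['db-node-1', 'db-node-1', 'db-node-1', 'db-node-1']

# Once the new configuration propagates, reads spread across the cluster again.
router.apply_config(["db-node-1", "db-node-2", "db-node-3"])
print([router.pick_node() for _ in range(6)])   # alternates across all three nodes
```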

Check the status of similar companies and alternatives to Jira Service Management

Atlassian: Systems Active
Zoom: Issues Detected
Dropbox: Systems Active
Miro: Systems Active
TeamViewer: Systems Active
Lucid Software: Systems Active
Restaurant365: Systems Active
Mural: Systems Active
Zenefits: Systems Active
Retool: Systems Active
Splashtop: Systems Active
Hiver: Systems Active

Frequently Asked Questions - Jira Service Management

Is there a Jira Service Management outage?
The current status of Jira Service Management is: Systems Active
Where can I find the official status page of Jira Service Management?
The official status page for Jira Service Management is here
How can I get notified if Jira Service Management is down or experiencing an outage?
To get notified of any status changes to Jira Service Management, simply sign up for OutLogger's free monitoring service. OutLogger checks the official status of Jira Service Management every few minutes and will notify you of any changes. You can view the status of all your cloud vendors in one dashboard. Sign up here. (A minimal sketch of this poll-and-notify pattern appears at the end of this page.)
What does Jira Service Management do?
Jira Service Management is Atlassian's IT service management (ITSM) product, built on the Jira platform. It lets IT, operations, and business teams receive, track, and resolve service requests, incidents, problems, and changes through a customer portal, queues, SLAs, and automation.
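
The notification FAQ above describes a simple poll-and-notify loop: check the vendor's official status page every few minutes and alert only when the reported status changes. A minimal sketch of that pattern is below. It assumes the vendor exposes a Statuspage-style JSON endpoint (the URL shown is illustrative and should be verified) and substitutes a print statement for a real notification channel such as email or Slack.

```python
import json
import time
import urllib.request

# Illustrative endpoint: Statuspage-hosted status pages commonly expose
# /api/v2/status.json, but confirm the exact URL for the vendor you monitor.
STATUS_URL = "https://jira-service-management.status.atlassian.com/api/v2/status.json"

def fetch_status(url: str = STATUS_URL) -> str:
    """Return the current status indicator, e.g. "none", "minor", "major"."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.load(resp)
    return payload.get("status", {}).get("indicator", "unknown")

def watch(poll_seconds: int = 300) -> None:
    """Poll every few minutes and notify only when the status changes."""
    last = None
    while True:
        current = fetch_status()
        if last is not None and current != last:
            print(f"Status changed: {last} -> {current}")   # swap in email/Slack/webhook here
        last = current
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch()
```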