Get notified about any outages, downtime, or incidents for ABBYY and 1800+ other cloud vendors. Monitor 10 companies for free.
Outage and incident data over the last 30 days for ABBYY.
Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!
OutLogger tracks the status of these components for ABBYY:
Component | Status |
---|---|
FlexiCapture Cloud AU | Active |
FlexiCapture Cloud EU | Active |
FlexiCapture Cloud EU2 | Active |
FlexiCapture Cloud US | Active |
Proof of Identity US | Active |
Timeline AU | Active |
Timeline EU | Active |
Timeline JP | Active |
Timeline US | Active |
Vantage AU | Active |
Vantage EU | Active |
Vantage US | Active |
View the latest incidents for ABBYY and check for official updates:
Description: The planned update of ABBYY Vantage in the EU Cloud to version 2.6.0, originally scheduled for 8 June 2024 at 16:00 CET, has been postponed to a later date. We apologize for any inconvenience; this schedule change was made to ensure that you receive the best possible experience. The specific date and time for the EU update will be provided to you next week. Service reliability is a top priority for ABBYY, and we are continuously working on improving our systems. If you have any questions or feedback, please feel free to contact your ABBYY technical contact directly or the ABBYY Support team via the webform at https://support.abbyy.com.
Status: Resolved
Impact: None | Started At: June 7, 2024, 11:37 a.m.
Description:

**Dear Customer,**

On May 31, 2024, ABBYY Vantage US experienced an interruption in the storage service operation, resulting in an outage of the Vantage platform. We are pleased to confirm that the issues have been mitigated and the service is now fully functional again. Please review the following incident Root Cause Analysis (RCA) information:

**Cloud instance**
* United States

**Incident timeframe**
* May 31, 2024, 14:30 – 16:30 UTC

**Incident status**
* Fully mitigated

**Customer impact**
* The platform was unable to complete new or existing processing transactions.
* Transactions could fail with the error message _“Original error: [BrokenCircuitException: The circuit is now open and is not allowing calls.]; Original error type: /app/bin/x86_64/OcrEngine.Worker.Base.dll”_
* Manual Review tasks could not be processed due to the error message _“Polly.CircuitBreaker.BrokenCircuitException: The circuit is now open and is not allowing calls. ---> System.Net.Http.HttpRequestException: Connection refused (app-st-storage.app.svc.cluster.local:80)”_
* Skills that were being edited could become unresponsive, or edits could be lost.

**Incident history**
* 14:30 UTC: The service health monitoring system was triggered, notifying the on-duty team of an increased rate of storage authentication errors from one pod. The team began investigating to mitigate the incident.
* 15:00 UTC: The team identified an issue with the credentials being used for storage authentication and attempted to resolve the issue and restart the affected pod, but was unsuccessful.
* 16:07 UTC: Following internal playbooks for incident mitigation, all pods were restarted, which led to additional storage authentication errors from other pods.
* 16:30 UTC: The root cause of the failing storage authentication was identified: the pods were receiving invalid storage credentials. After resolving the credentials issue and restarting the storage service, all pods started successfully and the platform returned to normal operation. The incident was considered fully mitigated.

**Root cause**
* The pods received incorrect storage authentication credentials after the service was reconfigured during a regular password rotation procedure.

**Mitigation measures**
* Corrected the configured source of the credentials used for storage authentication.
* Restarted the storage service and cleaned up the cached authentication credentials.

**Prevention measures**
* In the short term, optimize internal playbooks and define additional confirmation steps to verify the correctness of credentials updated during the password rotation procedure.
* Implement passwordless authentication for connections to the underlying service infrastructure components.

We apologize for any inconvenience and, most importantly, for the potential impact on your business. We are committed to preventing such issues in the future and will continue working on improving our infrastructure and monitoring solutions.

Thank you for using ABBYY Vantage! If you have any questions or feedback, please feel free to contact our support team via the [Help Center](https://support.abbyy.com) portal.

Yours faithfully,
[ABBYY Vantage](https://vantage-eu.abbyy.com/) Team
Status: Postmortem
Impact: Critical | Started At: May 31, 2024, 3:56 p.m.
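The `BrokenCircuitException` errors quoted in the RCA above come from Polly, the .NET resilience library named in the second error trace. For readers unfamiliar with the failure mode, here is a minimal, hypothetical C# sketch of a circuit breaker in front of an HTTP storage client: once consecutive failures (such as the "Connection refused" errors caused by the invalid credentials) exceed a threshold, the circuit opens and later calls fail fast with `BrokenCircuitException` instead of waiting on the broken dependency. The endpoint host is taken from the quoted error message; the thresholds and everything else are illustrative assumptions, not ABBYY's actual configuration.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;
using Polly.CircuitBreaker;

class StorageClientSketch
{
    static async Task Main()
    {
        // Host quoted in the incident's error message; the path and
        // thresholds below are illustrative assumptions.
        var http = new HttpClient
        {
            BaseAddress = new Uri("http://app-st-storage.app.svc.cluster.local:80")
        };

        // Open the circuit after 5 consecutive failures and keep it open
        // for 30 seconds before allowing a trial call through.
        AsyncCircuitBreakerPolicy breaker = Policy
            .Handle<HttpRequestException>()
            .CircuitBreakerAsync(
                exceptionsAllowedBeforeBreaking: 5,
                durationOfBreak: TimeSpan.FromSeconds(30));

        try
        {
            var response = await breaker.ExecuteAsync(() => http.GetAsync("/health"));
            Console.WriteLine($"Storage responded: {(int)response.StatusCode}");
        }
        catch (BrokenCircuitException)
        {
            // While the circuit is open, calls are rejected immediately with
            // the same exception type quoted in the customer-impact section.
            Console.WriteLine("Circuit open: failing fast instead of calling storage.");
        }
        catch (HttpRequestException ex)
        {
            // Failures before the circuit opens propagate directly.
            Console.WriteLine($"Storage call failed: {ex.Message}");
        }
    }
}
```

This pattern explains why the outage surfaced as two different errors: requests made while the breaker was probing saw the raw connection refusal, while requests made after it tripped saw the fast-fail `BrokenCircuitException`.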
Description: This incident has been resolved.
Status: Resolved
Impact: Critical | Started At: May 23, 2024, 6:55 p.m.
Description: This incident has been resolved.
Status: Resolved
Impact: Critical | Started At: May 22, 2024, 4:21 p.m.
Description:

**Dear Customer,**

On May 17, 2024, ABBYY FlexiCapture Cloud AU experienced a severe slowdown in document processing. We are pleased to confirm that the issues have been mitigated and the service is now fully functional again. Please review the following incident Root Cause Analysis (RCA) information:

**Cloud instance**
* Australia

**Incident timeframe**
* May 17, 2024, 06:10 – 09:30 UTC

**Incident status**
* Fully mitigated

**Customer impact**
* The processing of documents and tasks was performed with significant delays.

**Incident history**
* 06:10 UTC: The number of pending processing tasks in the queue started growing.
* 07:00 UTC: The service health monitoring system notified the on-duty team of the unexpectedly large task processing queue.
* 07:15 UTC: The on-duty team started identifying the root cause and mitigating the incident according to internal playbooks.
* 08:05 UTC: The team decided to reconfigure the infrastructure and increase the number of Processing Stations as well as the number of CPUs available for task processing.
* 08:20 UTC: The team observed that the newly added resources were immediately occupied by specific export tasks that could not be completed due to throttling, delaying the execution of other tasks.
* 09:15 UTC: The root cause of the incident was identified, and the team started mitigation measures.
* 09:30 UTC: The distribution of processing tasks returned to normal, the size of the task queue was reduced, and the incident was considered fully mitigated.

**Root cause**
* A defect in the throttling functionality of the service gave uncompleted export tasks the highest priority; these tasks occupied all CPU cores available to Processing Stations and delayed the execution of other types of tasks.

**Mitigation measures**
* The processing queue was adjusted to unblock processing from the uncompleted export tasks that were delaying it.

**Prevention measures**
* Improvements to the throttling functionality of the service have been implemented and deployed to the Australian production instance of ABBYY FlexiCapture Cloud.

We apologize for any inconvenience and, most of all, for the potential impact on your business. We are committed to preventing this issue in the future and will continue working on improving the infrastructure and our monitoring solutions.

Thank you for using ABBYY FlexiCapture Cloud! If you have any questions or feedback, please feel free to contact our support team via the [Help Center](https://support.abbyy.com) portal.

Yours faithfully,
[ABBYY FlexiCapture Cloud](https://www.abbyy.com/flexicapture-cloud-login) Team
Status: Postmortem
Impact: Minor | Started At: May 17, 2024, 7:51 a.m.
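The root cause above is a classic priority-starvation pattern: tasks that fail and are re-enqueued at the highest priority monopolize the workers, so everything behind them waits. The sketch below is a small, hypothetical C# model (not ABBYY's code) of that defect using .NET's `PriorityQueue`; the task names, priorities, and the retry-demotion fix are all illustrative assumptions.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical model of the RCA's starvation defect: export tasks that fail
// under throttling are re-enqueued at the highest priority, so recognition
// tasks behind them never run. Demoting each retry is one illustrative fix.
class TaskQueueSketch
{
    record WorkItem(string Name, string Kind, int Attempts);

    static void Main()
    {
        // .NET's PriorityQueue is a min-heap: lower values dequeue first.
        var queue = new PriorityQueue<WorkItem, int>();
        queue.Enqueue(new WorkItem("export-1", "export", 0), priority: 0);
        queue.Enqueue(new WorkItem("recognize-1", "recognition", 0), priority: 1);

        const bool demoteRetries = true; // set to false to reproduce the starvation

        for (int step = 0; step < 6 && queue.Count > 0; step++)
        {
            WorkItem item = queue.Dequeue();
            bool throttled = item.Kind == "export"; // exports keep failing in this model

            if (throttled)
            {
                // Buggy behavior: re-enqueue at priority 0 forever, starving others.
                // Fixed behavior: demote the retry so other task kinds get CPU time.
                int retryPriority = demoteRetries ? item.Attempts + 1 : 0;
                queue.Enqueue(item with { Attempts = item.Attempts + 1 }, retryPriority);
                Console.WriteLine($"{item.Name}: throttled, requeued at priority {retryPriority}");
            }
            else
            {
                Console.WriteLine($"{item.Name}: completed");
            }
        }
    }
}
```

With `demoteRetries` set to `false`, the failing export task is dequeued on every iteration and the recognition task never runs, mirroring how the stuck export tasks occupied all Processing Station cores during the incident.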