Get notified about any outages, downtime or incidents for SYNAQ and 1800+ other cloud vendors. Monitor 10 companies, for free.
Outage and incident data over the last 30 days for SYNAQ.
OutLogger tracks the status of these components for SYNAQ:
| Component | Status |
|---|---|
| SYNAQ Archive | Active |
| SYNAQ Branding | Active |
| SYNAQ Cloud Mail | Active |
| SYNAQ Continuity | Active |
| SYNAQ Q Portal | Active |
| SYNAQ Securemail | Active |
View the latest incidents for SYNAQ and check for official updates:
Description: Summary and Impact to Customers: From 1:38pm on Monday 10th June to 4:55pm on Tuesday 11th June, SYNAQ Cloud Mail experienced a minor service incident that caused mail delays for a subset of clients; affected users saw some of their mail delayed by up to 2 hours. Root Cause and Solution: The root cause was a failed controller on a backend storage device. Because of the failure, all access to the storage had to fail over to a single data path. The failover placed an abnormal load on that single path, increasing read and write latency and in turn delaying mail delivery to users' mailboxes. To resolve the issue, the controller was replaced and dual paths were restored, returning mail delivery performance to normal for affected users. Remediation Actions: While the built-in redundancy on the storage array prevented a complete loss of access to mailbox data, SYNAQ engineers are working with the storage vendor to increase single data path capacity so it can handle failover load without impacting users.
Status: Postmortem
Impact: Minor | Started At: June 11, 2019, 6:53 a.m.
Description: Summary and Impact to Customers: From 1:38pm on Monday 10th June to 4:55pm on Tuesday 11th June, SYNAQ Cloud Mail experienced a minor service incident that caused mail delays for a subset of clients; affected users saw some of their mail delayed by up to 2 hours. Root Cause and Solution: The root cause was a failed controller on a backend storage device. Because of the failure, all access to the storage had to fail over to a single data path. The failover placed an abnormal load on that single path, increasing read and write latency and in turn delaying mail delivery to users' mailboxes. To resolve the issue, the controller was replaced and dual paths were restored, returning mail delivery performance to normal for affected users. Remediation Actions: While the built-in redundancy on the storage array prevented a complete loss of access to mailbox data, SYNAQ engineers are working with the storage vendor to increase single data path capacity so it can handle failover load without impacting users.
Status: Postmortem
Impact: Minor | Started At: June 10, 2019, 11:38 a.m.
Description: Summary and Impact to Customers: On Wednesday 5th June from 10:55am to 7:27pm, SYNAQ Branding experienced a major service incident affecting all elements of the email branding service. Branding clients received malformed mails, mails with the body removed, and eventually mails with no branding at all. Root Cause and Solution: The root cause was a code release intended to correct a line break format issue for mail destined for a certain type of ticket management system, where branding was not displaying correctly. The release enforced CRLF line breaks on every mail, which caused issues with certain outbound MTAs that expected LF line breaks; as a result, the message bodies of numerous mails passing through the platform were malformed. To resolve the issue, the Branding service was disabled so mail could be delivered in its original format, the new code was rolled back to the previous stable version, and Branding was then re-enabled for all clients, after which mail was branded correctly again. Remediation Actions: Additional end-to-end testing methods were introduced to proactively identify this class of issue in future releases.
Status: Postmortem
Impact: Critical | Started At: June 5, 2019, 1:07 p.m.
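The line-ending detail in the postmortem above is where this kind of change typically goes wrong. As a purely illustrative sketch (this is not SYNAQ's actual code, and the precise failure mode in the incident may have differed), the Python snippet below shows one common way that blindly forcing CRLF on every message can malform bodies that already use CRLF, and a safer normalise-then-convert approach:

```python
# Hypothetical example only: why forcing CRLF line endings on every mail
# body can corrupt messages, and a safer normalisation strategy.

def force_crlf_naive(body: str) -> str:
    """Naive approach: replace every LF with CRLF.
    If the body already contains CRLF, this yields CR CR LF sequences,
    which many MTAs and parsers treat as malformed."""
    return body.replace("\n", "\r\n")


def normalize_line_endings(body: str, newline: str = "\n") -> str:
    """Safer approach: collapse all endings to LF first, then convert
    to the requested style exactly once."""
    unified = body.replace("\r\n", "\n").replace("\r", "\n")
    return unified if newline == "\n" else unified.replace("\n", newline)


if __name__ == "__main__":
    original = "Hello\r\nThis mail already uses CRLF\r\n"
    print(repr(force_crlf_naive(original)))                 # contains '\r\r\n' -- malformed
    print(repr(normalize_line_endings(original, "\r\n")))   # clean CRLF throughout
```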
Join OutLogger to be notified when any of your vendors or the components you use experience an outage or downtime. Join for free - no credit card required.