Get notified about any outages, downtime, or incidents for Firstup and 1,800+ other cloud vendors. Monitor 10 companies for free.
Outage and incident data over the last 30 days for Firstup.
Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!
OutLogger tracks the status of these components for Firstup:
Component | Status |
---|---|
3rd-Party Dependencies | Active |
Identity Access Management | Active |
Image Transformation API | Active |
SendGrid API v3 | Active |
Zoom Virtual Agent | Active |
Ecosystem | Active |
Connect | Active |
Integrations | Active |
Partner API | Active |
User Sync | Active |
Platforms | Active |
EU Firstup Platform | Active |
US Firstup Platform | Active |
Products | Active |
Classic Studio | Active |
Creator Studio | Active |
Insights | Active |
Microapps | Active |
Mobile Experience | Active |
Web Experience | Active |
View the latest incidents for Firstup and check for official updates:
Description: A hotfix has been deployed and verified to correct the underlying behavior of the author alias reverting on campaign drafts. Marking all components fully operational and resolving the incident.
Status: Resolved
Impact: None | Started At: Aug. 7, 2024, 4:31 p.m.
Description: Queued campaigns are now being delivered successfully, and we are monitoring the issue.
Status: Monitoring
Impact: None | Started At: July 23, 2024, 11:42 a.m.
Description:
**Summary:** On July 10th, 2024, beginning at approximately 1:30 PM ET (17:30 UTC), we started receiving reports that published email campaigns had not been delivered to their intended audiences more than an hour after publishing. Due to the number of reports received, a platform incident was declared at 2:18 PM ET (18:18 UTC) and an incident response team began investigating. A second platform incident was declared on July 11th, 2024, after reports that audiences were inaccessible or taking too long to load.
**Impact:** The service degradation was intermittent, and the impact was restricted to the US platform: access to some audiences, and some campaigns published between July 10th, 2024, at 11:20 AM ET (15:20 UTC) and July 15th, 2024, at 6:23 PM ET (22:23 UTC).
**Root Cause:** Both incidents stemmed from an overload of the Elasticsearch service, which resolves audiences to user IDs and email addresses. A surge in error messages temporarily stored in a queue (for messages Elasticsearch couldn't process) overwhelmed the service, causing it to intermittently stop serving requests until it could catch up.
**Mitigation:** The issue was immediately addressed by reducing the number of workers sending requests to Elasticsearch and increasing the number of nodes processing those requests. This reduced the strain on Elasticsearch, allowing the request queue to clear faster. Additionally, the errored messages were manually reprocessed, making audiences accessible and campaigns publishable again.
**Recurrence Prevention:** Errors in the queue are normal and typically resolve through automatic reprocessing. However, to prevent future occurrences:
* We doubled Elasticsearch's processing power on July 15th, 2024, at 6:23 PM ET (22:23 UTC) to better handle any spikes.
* We enabled additional monitoring and dashboards for early detection and mitigation of potential issues.
* We will investigate and address the sources of the errors to ensure a healthier service.
Status: Postmortem
Impact: None | Started At: July 11, 2024, 7:07 p.m.
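As context for the root cause and mitigation described in the postmortem above, the sketch below illustrates the general pattern at play: a bounded pool of workers draining an audience-resolution queue against a search backend, with messages the backend cannot process parked in a retry queue for throttled reprocessing. This is a minimal illustration of the shape of the mitigation (capping concurrent workers, reprocessing errored messages), not Firstup's actual implementation; every name, number, and structure here is an assumption.

```python
import queue
import threading
import time

# Minimal sketch only. All names and parameters are illustrative
# assumptions; this is not Firstup's actual code.

MAX_WORKERS = 4                # "reducing the number of workers" = lowering this cap
request_queue = queue.Queue()  # audience-resolution requests awaiting the backend
retry_queue = queue.Queue()    # messages the backend could not process

def resolve_audience(message):
    """Placeholder for the Elasticsearch call that resolves an
    audience to user IDs and email addresses."""
    ...

def worker():
    # Each worker drains the request queue; the worker count caps
    # concurrent load on the search backend.
    while True:
        message = request_queue.get()
        try:
            resolve_audience(message)
        except Exception:
            # Park failures instead of retrying immediately, so a burst
            # of errors cannot hammer the already-struggling backend.
            retry_queue.put(message)
        finally:
            request_queue.task_done()

def reprocess_retries():
    # Throttled reprocessing of parked messages. Per the postmortem,
    # during the incident this step was performed manually.
    while True:
        message = retry_queue.get()
        request_queue.put(message)
        retry_queue.task_done()
        time.sleep(0.1)  # rate limit so retries cannot flood the queue

for _ in range(MAX_WORKERS):
    threading.Thread(target=worker, daemon=True).start()
threading.Thread(target=reprocess_retries, daemon=True).start()

# Demo: enqueue one request and wait for the pool to finish it.
request_queue.put({"campaign_id": 1, "audience": "us-employees"})
request_queue.join()
```

In these terms, "doubling Elasticsearch's processing power" corresponds to raising the capacity behind resolve_audience rather than the worker cap; the two knobs trade off queue drain speed against load on the backend.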
Description: This July 10th incident shares the postmortem posted for the July 11th incident above; see that entry for the full summary, impact, root cause, mitigation, and recurrence prevention.
Status: Postmortem
Impact: None | Started At: July 10, 2024, 6:24 p.m.
Join OutLogger to be notified when any of your vendors or the components you use experience an outage or downtime. Join for free - no credit card required.