
Is there a Kustomer outage?

Kustomer status: Systems Active

Last checked: 8 minutes ago

Get notified about any outages, downtime, or incidents for Kustomer and 1,800+ other cloud vendors. Monitor up to 10 companies for free.

Subscribe for updates

Kustomer outages and incidents

Outage and incident data over the last 30 days for Kustomer.

There have been 3 outages or incidents for Kustomer in the last 30 days.


Tired of searching for status updates?

Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!

Sign Up Now

Components and Services Monitored for Kustomer

OutLogger tracks the status of these components for Kustomer:

Regional Incident Active
Analytics Active
API Active
Bulk Jobs Active
Channel - Chat Active
Channel - Email Active
Channel - Facebook Active
Channel - Instagram Active
Channel - SMS Active
Channel - Twitter Active
Channel - WhatsApp Active
CSAT Active
Events / Audit Log Active
Exports Active
Knowledge base Active
Kustomer Voice Active
Notifications Active
Registration Active
Search Active
Tracking Active
Web Client Active
Web/Email/Form Hooks Active
Workflow Active
OpenAI Active
PubNub Active

Latest Kustomer outages and incidents.

View the latest incidents for Kustomer and check for official updates:

Updates:

  • Time: Sept. 13, 2024, 5:23 p.m.
    Status: Resolved
    Update: The issue regarding Gmail conversations across ALL PODS has been resolved. After careful monitoring, our team has determined that all affected areas are now fully restored. Please reach out to Kustomer support via Email or Chat if you have additional questions or concerns.
  • Time: Sept. 13, 2024, 4:05 p.m.
    Status: Monitoring
    Update: Kustomer is aware of an event reported by one of our third party vendors affecting Gmail conversations across ALL PODS that may cause latency when sending and receiving Gmail emails within the platform. Our team is monitoring the incident, and working with the vendor where possible to resolve the issue. Please expect further updates within the next 3 hours, and reach out to Kustomer support through Chat if you have additional questions or concerns.

Updates:

  • Time: Sept. 26, 2024, 2:19 p.m.
    Status: Postmortem
    Update:
    **Summary**
    On September 12, 2024, customers on our Prod 1 cluster experienced elevated latency on multiple features of the Kustomer product.
    **Root Cause**
    An error during a Sobjects service deployment consumed all available hardware resources and prevented the event processor from scaling in response to increased load. This slowed event processing, which led to latency in sending responses. Our standard auto-recovery attempts failed, so our engineers had to fix the issue manually.
    **Timeline (Sep 12, 2024)**
      • 2:44 PM EDT: On-call engineers were alerted to increased latency in the platform, kicking off an investigation.
      • 2:48 PM EDT: Kustomer's support team began receiving reports of high latency across the platform.
      • 3:30 PM EDT: The issue was identified in the event processor, which had hit a limit during scale-out. The on-call engineer manually increased this limit to quickly restore operations.
      • 4:08 PM EDT: Latency metrics returned to normal levels.
      • 5:21 PM EDT: All delayed events were processed and dead-lettered items were fully redriven.
    **Lessons/Improvements**
      • Improve monitoring and alerting: the team was alerted to the failures and began investigating immediately, but did not have immediate visibility into their cause. We have begun improving our monitoring to allow for quicker response times in any future failure like this.
        • [DONE] Fixed the observability issue with the observability tool, which will help investigate such issues faster in the future.
      • Investigate mitigation techniques: although improved monitoring would let us respond to and resolve this issue faster in the future, ideally we want to reduce the chance of it happening at all. We have begun researching ways to reduce the chance of recurrence.
        • [DONE] Optimize release schedule and cadence.
        • [IN PROGRESS] Investigate memory and CPU limits on the event-handling service.
  • Time: Sept. 12, 2024, 9:21 p.m.
    Status: Resolved
    Update: Kustomer has resolved an event affecting all conversations in POD 1 that caused Message objects to not be sent, received, or updated. After careful monitoring, our team has determined that all affected areas are now fully restored. Please reach out to Kustomer support via Chat or Email if you have additional questions or concerns.
  • Time: Sept. 12, 2024, 8:16 p.m.
    Status: Monitoring
    Update: Kustomer has implemented an update to address an event affecting Message sending in Chat and Email conversations in POD1 that caused lag to Message objects being sent, received and updated in Channels. Our team is currently monitoring this update to ensure the issue is fully resolved. Please expect further updates within the next 30 minutes, and reach out to Kustomer support in Chat or Email if you have additional questions or concerns.
  • Time: Sept. 12, 2024, 8:13 p.m.
    Status: Monitoring
    Update: Kustomer has implemented an update to address an event affecting Message sending in Chat and Email conversations in POD1 that caused lag to Message objects being sent, received and updated in Channels. Our team is currently monitoring this update to ensure the issue is fully resolved. Please expect further updates within the next 30 minutes, and reach out to Kustomer support in Chat or Email if you have additional questions or concerns.
  • Time: Sept. 12, 2024, 7:23 p.m.
    Status: Investigating
    Update: Kustomer is aware of an event affecting Chat and Email conversations that may cause severe lag within the platform when sending and receiving messages, resulting in long delays or failed delivery. Our team is currently working to identify the cause of this issue in an effort to implement a resolution. Please expect additional updates within the next 30 minutes.

Updates:

  • Time: Aug. 9, 2024, 11:39 p.m.
    Status: Postmortem
    Update:
    **Incident Post Mortem [8/8/2024]**
    **Summary**
    The Search service became unavailable for orgs on the same shard as the client Promise, because too many long-running search queries backed up the service for other orgs attempting to make requests. Clients were unable to run searches for some time, but after the suspect client was identified, service was restored and operational for all but one client by end of day. The following day, a solution was implemented that reduced the suspect client's search query times by 88%, restoring Search service access and operations for all clients.
    Incident Owner: Jacob Hansen
    Jira: https://kustomer.atlassian.net/browse/KDEV-65642
    Datadog Incident / Slack: https://kustomer.slack.com/archives/C07FYE0T691
    **What happened**
      • Clients were not able to access or interact with the Search service.
      • A kill switch was applied to a handful of clients to restore Search service utilization metrics.
      • The Search service was restored.
      • Investigation identified a suspect client that had brought the Search service down.
      • Manual testing with the suspect client's queries proved out a viable solution for the client.
      • A solution was implemented to improve the suspect client's query times and reopen the service to all clients.
    **Timeline of Events**
    08/07/2024
      • 11:53: Sentry alerts with the message "API failure: Internal Server Error". Sentry sends the alert to the #on-call Slack channel, and the on-call primary begins investigating the history of this alert to gain context, inquiring with another dev about how it was previously handled (that dev was on PTO, so no response). References: https://kustomer.slack.com/archives/C0561J88DFD/p1723049597710699?thread_ts=1721765840.363319&cid=C0561J88DFD and https://kustomer.slack.com/archives/C0561J88DFD/p1723049599090139?thread_ts=1721765846.724659&cid=C0561J88DFD
      • 12:17: A CX team member reports to the CX team that a client is having issues with Search. Reference: https://kustomer.slack.com/archives/C4S5QJ668/p1723051033909989
      • 12:30: The on-call primary asks the Dev department for help interpreting the multiple Sentry alerts; several devs join in to help triage the issue. Reference: https://kustomer.slack.com/archives/C0555C1R7RB/p1723051830049299
      • 12:42: A dev finds that multiple search cluster nodes have high CPU and memory utilization. Reference: https://kustomer.slack.com/archives/C0555C1R7RB/p1723052530844589?thread_ts=1723051830.049299&cid=C0555C1R7RB
      • 12:50: Incident channel created.
      • 13:17: A dev proposes preventing the suspected clients from bringing down the service with expensive searches, and adds the suspected clients' details to a blocklist so they can make no further requests to the Search service. References: https://kustomer.slack.com/archives/C07FYE0T691/p1723053529574429 and https://github.com/kustomer/customer-search/pull/814
      • 13:25: A dev tracks all impacted clients according to Sentry alerts. Reference: https://kustomer.slack.com/archives/C07FYE0T691/p1723055122955879
      • 13:49: A dev applies an org-wide block on the Search service for the suspected client; previous block efforts showed no improvement, so the block scope was increased. Reference: https://kustomer.slack.com/archives/C07FYE0T691/p1723056541587549
      • 13:58: Blocks are applied to the Search service for multiple clients to help identify the problem sources. Reference: https://kustomer.slack.com/archives/C07FYE0T691/p1723057086107159
      • 14:22: Status page created. Reference: https://kustomer.slack.com/archives/C07FYE0T691/p1723058541574799
      • 14:43: Improvement is noted as node CPU and memory utilization levels recover. Reference: https://kustomer.slack.com/archives/C07FYE0T691/p1723059808276589
      • 15:25: The problematic source/client is identified and service is returned to all other clients. References: https://kustomer.slack.com/archives/C07FYE0T691/p1723062321187109 and https://kustomer.slack.com/archives/C07FYE0T691/p1723060203276869
      • 17:30: The suspect client's slow searches are identified. Reference: https://kustomer.slack.com/archives/C07FYE0T691/p1723069834440109
      • 17:39: A solution for the client's slow searches is proposed; a dev manually verifies that adding a timeframe to the client's searches improves request time by 88.46%. Reference: https://kustomer.slack.com/archives/C07FYE0T691/p1723070382983199
      • 17:45: End-of-day decision to keep the suspect client blocked from the Search service overnight; the proposed solution is shared with the client. Reference: https://kustomer.slack.com/archives/C07FYE0T691/p1723070724565159
    08/08/2024
      • 08:45: A dev updates the most problematic searches and restores search functionality, without aggregations, to the affected client's instance.
      • 10:23: A dev finishes the additional fixes to the client's searches and restores aggregation functionality in search for the client. Reference: https://kustomer.slack.com/archives/C07FYE0T691/p1723130614738959?thread_ts=1723130568.437829&cid=C07FYE0T691
      • 12:05: Incident marked resolved.
    **Impact**
    Orgs on the same shard as the client Promise would have experienced the API error and/or partial errors.
    **Technical details**
    This was a snowball effect: a single client caused latency with expensive search queries, which led several other clients to add to the load on the overwhelmed Search service. Once this was discovered, the solution was implemented effectively.
    **Incident Review**
    Went well:
      • Sentry alerts first brought attention to the errors and showed the impacted clients.
      • A responsive team quickly jumped on the issue.
    Potential for improvement:
      • It was extremely difficult to identify the list of orgs on a node.
      • We are too reliant on ELK and are not ready to transfer to CLX, given the lack of dashboards and of familiarity with creating custom queries.
      • Deploying changes to the kill switch took too long.
    Other notes:
      • PagerDuty PM: https://kustomer.pagerduty.com/postmortems/17e9cfc6-445b-9807-c0a5-3b9e8da5dd45
    **Action Items**
      • Review icebox proposals to improve the Search service.
      • Increase visibility into this service; consider a scheduled job that reports statistics about orgs (see https://docs.google.com/spreadsheets/d/1Z0r_Ho4zb0WrRao-ZOvQf4e_cNX0FDSpTu8RTNIx57g/edit?gid=0#gid=0).
      • Determine how to prevent such an issue programmatically.
      • Improve the kill switch by moving it (to system search) so that deployments are faster.
      • Create a runbook for this type of issue.
      • Transfer saved queries and dashboards from ELK to CLX.
  • Time: Aug. 7, 2024, 10:53 p.m.
    Status: Resolved
    Update: Kustomer has resolved an event affecting Searches that caused searches to be unavailable within the platform for a small number of orgs. To resolve this issue, our team has taken action to reduce CPU usage. After careful monitoring, our team has determined that all affected areas are now fully restored. Please reach out to Kustomer support at [email protected] if you have additional questions or concerns.
  • Time: Aug. 7, 2024, 8:30 p.m.
    Status: Monitoring
    Update: Kustomer has resolved an event affecting Searches that caused searches to be unavailable within the platform for a small number of orgs. To resolve this issue, our team has taken action to reduce CPU usage. After careful monitoring, our team has determined that all affected areas are now fully restored for 99% of our customers. Please reach out to Kustomer support at [email protected] if you have additional questions or concerns.
  • Time: Aug. 7, 2024, 8:08 p.m.
    Status: Monitoring
    Update: Kustomer has resolved an event affecting Searches that caused searches to be unavailable within the platform for a small number of orgs. To resolve this issue, our team has taken action to reduce CPU usage. After careful monitoring, our team has determined that all affected areas are now fully restored for the majority of our customers, with the exception of a small number of orgs. Please reach out to Kustomer support at [email protected] if you have additional questions or concerns.
  • Time: Aug. 7, 2024, 7:32 p.m.
    Status: Investigating
    Update: Kustomer is aware of an event affecting Searches that may cause searches to be unavailable within the platform for some orgs. Our team is currently working to identify the cause of this issue in an effort to implement a resolution. Please expect additional updates within the next 30 minutes, and reach out to Kustomer support at [email protected] if you have additional questions or concerns.
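The postmortem above credits the recovery to adding a timeframe to the client's searches, which cut request time by roughly 88%. As a rough illustration of that kind of fix (not Kustomer's actual implementation), the sketch below adds a `range` filter to an Elasticsearch-style query body so a search cannot scan an org's entire history; the field name `created_at` and the query shape are assumptions for the example.

```python
# Illustrative sketch: bounding an Elasticsearch-style search query with a
# time range so it cannot scan unbounded history. The `created_at` field
# name is hypothetical, not taken from the postmortem.
from datetime import datetime, timedelta, timezone

def bound_query(query: dict, days: int = 30) -> dict:
    """Return a copy of the query body with a `range` filter on
    `created_at` added, limiting results to the last `days` days."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=days)).isoformat()
    bounded = dict(query)
    bool_clause = dict(bounded.get("query", {}).get("bool", {}))
    filters = list(bool_clause.get("filter", []))
    filters.append({"range": {"created_at": {"gte": cutoff}}})
    bool_clause["filter"] = filters
    bounded["query"] = {"bool": bool_clause}
    return bounded

# Example: an unbounded match query becomes a time-bounded one,
# leaving the original query dict untouched.
slow = {"query": {"bool": {"must": [{"match": {"status": "open"}}]}}}
fast = bound_query(slow, days=30)
```

A filter like this is cheap for the search engine to apply and dramatically shrinks the candidate document set, which is why a timeframe bound is a common first remedy for runaway search latency.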

Updates:

  • Time: July 22, 2024, 3:17 p.m.
    Status: Resolved
    Update: Kustomer has resolved an event affecting core platform systems that caused latency in searches and timelines. After careful monitoring, our team has concluded that all affected areas are now fully restored. Please reach out to Kustomer support at [email protected] with any additional questions or concerns.
  • Time: July 22, 2024, 3:05 p.m.
    Status: Monitoring
    Update: We are continuing to monitor for any further issues.
  • Time: July 22, 2024, 3:01 p.m.
    Status: Monitoring
    Update: Kustomer has identified an event affecting platform systems that may cause customer and conversation timelines within the platform to experience trouble loading. Our team has implemented an update to address this issue and is monitoring to verify that it is resolved. Please expect additional updates within the next 30 minutes, and reach out to Kustomer support at [email protected] if you have additional questions or concerns.


Check the status of similar companies and alternatives to Kustomer

Qualtrics: Systems Active

Talkdesk: Systems Active

Braze: Systems Active

Pendo: Systems Active

Demandbase: Systems Active

Branch: Systems Active

Movable Ink: Systems Active

Enable: Systems Active

ClickFunnels Classic: Systems Active

Kajabi: Systems Active

Chili Piper: Systems Active

iAdvize (HA): Systems Active

Frequently Asked Questions - Kustomer

Is there a Kustomer outage?
The current status of Kustomer is: Systems Active
Where can I find the official status page of Kustomer?
The official status page for Kustomer is here
How can I get notified if Kustomer is down or experiencing an outage?
To get notified of any status changes to Kustomer, simply sign up for OutLogger's free monitoring service. OutLogger checks the official status of Kustomer every few minutes and will notify you of any changes. You can view the status of all your cloud vendors in one dashboard. Sign up here
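A periodic check like the one described can be approximated with a small script. The sketch below assumes the vendor's status page is an Atlassian Statuspage instance exposing the standard `/api/v2/status.json` endpoint, a common pattern but not confirmed here; the URL is a placeholder.

```python
# Minimal sketch of polling a Statuspage-style status endpoint.
# The URL is an assumption; substitute the vendor's real status page.
import json
import urllib.request

STATUS_URL = "https://status.kustomer.com/api/v2/status.json"  # assumed

def parse_status(payload: dict) -> str:
    """Extract the overall indicator ('none', 'minor', 'major',
    'critical') from a Statuspage-style status.json payload."""
    return payload.get("status", {}).get("indicator", "unknown")

def fetch_status(url: str = STATUS_URL) -> str:
    """Fetch and parse the live status (network call; not run here)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_status(json.load(resp))

# Example payload in the Statuspage format:
sample = {"status": {"indicator": "none",
                     "description": "All Systems Operational"}}
```

Running `fetch_status()` on a schedule and alerting when the indicator changes from "none" is essentially what a monitoring service automates for many vendors at once.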
What does Kustomer do?
Kustomer offers omnichannel messaging, a unified customer view, and AI-powered automations to enhance customer experiences.