Get notified about any outages, downtime or incidents for CometChat and 1800+ other cloud vendors. Monitor 10 companies, for free.
Outage and incident data over the last 30 days for CometChat.
Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!
OutLogger tracks the status of these components for CometChat:
| Component | Status |
|---|---|
| **CometChat APIs** | Active |
| App Management API | Active |
| Rest API (AU) | Active |
| Rest API (EU) | Active |
| Rest API (IN) | Active |
| Rest API (US) | Active |
| **CometChat Frontends** | Active |
| Dashboard | Active |
| Website | Active |
| **CometChat v2** | Active |
| Client API (EU) | Active |
| Client API (US) | Active |
| WebRTC (EU) | Active |
| WebRTC (US) | Active |
| WebSockets (EU) | Active |
| WebSockets (US) | Active |
| **CometChat v3** | Active |
| Client API (EU) | Active |
| Client API (IN) | Active |
| Client API (US) | Active |
| WebRTC (EU) | Active |
| WebRTC (IN) | Active |
| WebRTC (US) | Active |
| WebSocket (IN) | Active |
| WebSockets (EU) | Active |
| WebSockets (US) | Active |
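A status table in this format is typically backed by a machine-readable endpoint, so you can poll component health directly instead of watching the page. The sketch below assumes CometChat's status page follows the common Atlassian Statuspage convention of exposing `/api/v2/components.json`; the hostname and the endpoint's availability are assumptions, not something this page confirms.

```python
# Minimal sketch: poll a Statuspage-style components endpoint and list
# any component that is not fully operational. The host below is an
# assumption -- substitute the vendor's actual status page URL.
import requests

STATUS_URL = "https://status.cometchat.com/api/v2/components.json"  # assumed host

def degraded_components(url: str = STATUS_URL) -> list[dict]:
    """Return components whose status is anything other than 'operational'."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return [
        c for c in resp.json().get("components", [])
        if c.get("status") != "operational"
    ]

if __name__ == "__main__":
    issues = degraded_components()
    if not issues:
        print("All components operational.")
    for c in issues:
        print(f"{c['name']}: {c['status']}")
```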
View the latest incidents for CometChat and check for official updates:
Description: Starting around 12:35pm MST on January 27, 2021, some customers began experiencing occasional errors and increased latency while using CometChat. Around 12:45pm MST there was a rapid increase in errors, and CometChat wasn't usable for most customers with apps hosted in our US region. At that point we began migrating our customers to a separate database cluster, and some customers started seeing improvements. By 1:35pm MST the migration was complete and all customers were able to use CometChat again. A root cause analysis revealed that our backup policies coincided with an infrastructure issue at our cloud vendor, which suspended our I/O operations for an extended period of time. Our cloud vendor's internal monitoring tools observed this behavior, which eventually caused the underlying hosts to be replaced. While this operation was being performed, a backlog of transactions built up, which ultimately led to the outage. Our current priority is working alongside our cloud vendor and putting safeguards in place to prevent similar problems from happening again. We're truly sorry for the disruption.
Status: Postmortem
Impact: Critical | Started At: Jan. 27, 2021, 7:35 p.m.
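An incident like this surfaces to integrators first as intermittent errors and elevated latency, and only later as a full outage. A common client-side safeguard, independent of any particular vendor SDK, is to retry idempotent requests with exponential backoff and jitter; the sketch below is a generic pattern with an illustrative URL, not part of CometChat's API.

```python
# Generic retry-with-backoff sketch for idempotent HTTP calls during a
# partial outage. The URL passed in is an illustrative placeholder.
import random
import time

import requests

def get_with_backoff(url: str, max_attempts: int = 5) -> requests.Response:
    """Retry transient failures (5xx, timeouts) with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            resp = requests.get(url, timeout=5)
            if resp.status_code < 500:
                return resp  # success, or a client error not worth retrying
        except requests.RequestException:
            pass  # connection error or timeout: treat as transient
        # Full jitter: sleep a random amount up to 2^attempt seconds, capped at 30s.
        time.sleep(random.uniform(0, min(2 ** attempt, 30)))
    raise RuntimeError(f"Gave up on {url} after {max_attempts} attempts")
```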
Description: On Thursday at 3:28AM MST, our engineers noticed a sharp increase in traffic on one of our API clusters in the EU region, which resulted in API downtime for customers on that shared cluster. At 3:34AM MST the issue was resolved automatically by our self-healing architecture. We use an auto-scaling mechanism that instantiates new servers as traffic increases; however, its configuration did not anticipate such a rapid increase in traffic, and new servers were added more slowly than required to maintain uptime. As a result, customers on that shared cluster faced API downtime for a few minutes. We have now tuned our auto-scaling mechanism to react much more quickly to avoid such an issue in the future. We apologize for the inconvenience.
Status: Resolved
Impact: None | Started At: Dec. 10, 2020, 10:30 a.m.
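Making auto-scaling react more quickly, as described above, comes down to how aggressively capacity is added once load crosses a threshold. The loop below is a deliberately simplified illustration of threshold-based scale-out, not CometChat's actual configuration; the step size and per-server target are invented knobs that show why adding one server per evaluation cycle lags a sudden spike.

```python
# Simplified threshold-based autoscaler loop. All parameters are
# illustrative; real systems (e.g. cloud auto-scaling groups) expose
# equivalent knobs as scaling policies and cooldowns.
from dataclasses import dataclass

@dataclass
class Autoscaler:
    servers: int = 2
    max_servers: int = 20
    target_rps_per_server: int = 500
    scale_out_step: int = 4  # adding several servers at once closes the gap faster

    def tick(self, current_rps: float) -> int:
        """One evaluation cycle: scale out if per-server load exceeds the target."""
        if current_rps / self.servers > self.target_rps_per_server:
            self.servers = min(self.servers + self.scale_out_step, self.max_servers)
        return self.servers

# A spike from 1,000 to 8,000 rps: with a step size of 1 the fleet would
# need many cycles to catch up -- the "slower than required" failure mode.
scaler = Autoscaler()
for rps in (1_000, 8_000, 8_000, 8_000):
    print(rps, "rps ->", scaler.tick(rps), "servers")
```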
Description: On Thursday at 6:39AM MST, our engineers identified an unusual spike in traffic hitting one of our API clusters in the EU region. As a result, some of our databases were inundated with requests, which degraded performance for many of our customers on that shared cluster. At 7:06AM MST the issue was resolved. In light of this event, we have added flood-protection mechanisms to avoid such an issue in the future. We apologize for the inconvenience. If you are still facing this issue, please get in touch with us.
Status: Resolved
Impact: Major | Started At: Nov. 19, 2020, 1:30 p.m.
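The incident doesn't specify which flood-protection mechanisms were added, but a common building block for shielding databases from request floods is a token-bucket rate limiter in front of the hot path. The sketch below is a generic single-process illustration under that assumption, not CometChat's implementation; the rate and burst values are invented for the example.

```python
# Generic token-bucket rate limiter: a common flood-protection primitive.
# Rate and capacity below are illustrative values, not vendor settings.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second (steady throughput)
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=100, capacity=200)  # 100 req/s steady, bursts to 200
if not bucket.allow():
    pass  # shed load here (e.g. return 429) instead of hitting the database
```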
Join OutLogger to be notified when any of your vendors or the components you use experience an outage or downtime. Join for free, no credit card required.