
Is there a Mezmo outage?

Mezmo status: Systems Active

Last checked: 7 minutes ago

Get notified about any outages, downtime, or incidents for Mezmo and 1800+ other cloud vendors. Monitor 10 companies for free.

Subscribe for updates

Mezmo outages and incidents

Outage and incident data over the last 30 days for Mezmo.

There has been 1 outage or incident for Mezmo in the last 30 days.

Tired of searching for status updates?

Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!

Sign Up Now

Components and Services Monitored for Mezmo

OutLogger tracks the status of these components for Mezmo:

Alerting Active
Archiving Active
Livetail Active
Log Ingestion (Agent/REST API/Code Libraries) Active
Log Ingestion (Heroku) Active
Log Ingestion (Syslog) Active
Search Active
Web App Active
Destinations Active
Ingestion / Sources Active
Processors Active
Web App Active

Latest Mezmo outages and incidents

View the latest incidents for Mezmo and check for official updates:

Updates:

  • Time: April 20, 2021, 11:54 a.m.
    Status: Resolved
    Update: The incident has been resolved and the logs are accessible in web app.
  • Time: April 20, 2021, 11:12 a.m.
    Status: Monitoring
    Update: Web app is operational and we are continuing to monitor all services.
  • Time: April 20, 2021, 10:08 a.m.
    Status: Monitoring
    Update: Our engineers applied a fix and we are monitoring the results.
  • Time: April 20, 2021, 9:04 a.m.
    Status: Investigating
    Update: We are continuing to investigate this issue.
  • Time: April 20, 2021, 9:04 a.m.
    Status: Investigating
    Update: Our WebUI is not loading pages consistently. We are currently investigating the issue.

Updates:

  • Time: March 29, 2021, 1:11 p.m.
    Status: Resolved
    Update: This incident has been resolved.
  • Time: March 28, 2021, 6:48 p.m.
    Status: Identified
    Update: We have resumed all services and functionality. We are continuing to monitor the situation.
  • Time: March 26, 2021, 3:45 p.m.
    Status: Identified
    Update: As we continue working on this issue, we have temporarily scaled down live-tail and alerting. Some customers may not see live tail and receive new alerts. New logs still appear in the UI with delays.
  • Time: March 26, 2021, 10:33 a.m.
    Status: Identified
    Update: We have resumed live tail and alerting now. Some users may still experience delays with searching and archiving. We are continuing to monitor the situation.
  • Time: March 26, 2021, 5:01 a.m.
    Status: Identified
    Update: As we continue working on this issue, we have temporarily scaled down livetail and alerting. Some customers may not see live tail and receive new alerts. New logs still appear in the UI with delays.
  • Time: March 25, 2021, 8:43 p.m.
    Status: Identified
    Update: We are still experiencing some delays, new logs are continuing to be processed and our engineers are taking steps to mitigate the impact.
  • Time: March 25, 2021, 7:07 a.m.
    Status: Identified
    Update: We are still experiencing some delays, new logs are continuing to be processed. Investigations are ongoing.
  • Time: March 25, 2021, 1:54 a.m.
    Status: Identified
    Update: New logs are still appearing in the UI with delays for some customers. We are continuing to investigate.
  • Time: March 24, 2021, 11:44 p.m.
    Status: Identified
    Update: Delays are still being experienced by some customers. We continue to work towards a solution.
  • Time: March 24, 2021, 10:38 p.m.
    Status: Identified
    Update: For some customers, newly submitted logs are being ingested successfully but are only appearing in our UI after long delays. Our engineers have identified the cause and are working towards a solution.

Updates:

  • Time: March 25, 2021, 5:30 p.m.
    Status: Postmortem
    Update:
    **Dates:** Start Time: Thursday, March 4, 2021, at ~03:45 UTC. End Time: Thursday, March 4, 2021, at ~08:20 UTC. Duration: ~4:36:00.
    **What happened:** Our Web UI returned an error message "Request returned an error. Try again?" when users tried to perform a search query or use Live Tail in the Web UI.
    **Why it happened:** The pods that run our searching and Live Tail services were automatically terminated by our Kubernetes orchestration system. Upon investigation, we discovered we had inadvertently classed these services as low priority. The incident occurred when a large number of other services that were classed as higher priority needed to run to meet usage demands. The orchestration system automatically terminated the lower-priority services to make resources available for the higher-priority services. More specifically, these pods were put into a “terminating” state. Normally this state is temporary, a transition between “running” and “terminated”. During this incident, the pods remained in the “terminating” state permanently. Our monitoring detects services that have been “terminated”, but not ones that are in the temporary “terminating” state. Consequently, our infrastructure team was not notified.
    **How we fixed it:** We increased the priority of the pods that run our searching and Live Tail services to match the priority of other services. We updated the configuration of our orchestration system to make the change permanent.
    **What we are doing to prevent it from happening again:** We’ve already updated the configuration of our orchestration system to give services the correct priority. These changes are permanent and should prevent similar problems in the future. (A sketch of the missing “terminating” check appears after this update list.)
  • Time: March 4, 2021, 8:21 a.m.
    Status: Resolved
    Update: This incident has been resolved and logs are searchable in the web app. We'll continue to monitor all services.
  • Time: March 4, 2021, 8:12 a.m.
    Status: Monitoring
    Update: A fix has been implemented and we are monitoring the results.
  • Time: March 4, 2021, 8 a.m.
    Status: Investigating
    Update: We are currently investigating an issue that is rendering our log viewer unavailable at this time.
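
The monitoring gap called out in the postmortem above is detectable: a pod whose deletion has stalled keeps its `metadata.deletionTimestamp` set but never leaves the “Terminating” state. Below is a minimal, hypothetical sketch of such a check; the 10-minute threshold and the `kubectl` invocation are assumptions for illustration, not Mezmo's actual tooling.

```python
#!/usr/bin/env python3
"""Flag pods stuck in the "Terminating" state.

Standard checks catch pods that have been terminated, but a pod whose
deletion has stalled can sit in "Terminating" indefinitely. Such a pod
has metadata.deletionTimestamp set, so we can alert when that timestamp
is older than a generous grace period.
"""
import json
import subprocess
from datetime import datetime, timedelta, timezone

STUCK_AFTER = timedelta(minutes=10)  # assumed threshold; tune per workload

def stuck_terminating_pods(namespace="default"):
    """Return names of pods that began terminating more than STUCK_AFTER ago."""
    out = subprocess.run(
        ["kubectl", "get", "pods", "-n", namespace, "-o", "json"],
        capture_output=True, check=True, text=True,
    ).stdout
    now = datetime.now(timezone.utc)
    stuck = []
    for pod in json.loads(out)["items"]:
        # deletionTimestamp is set the moment deletion begins ("Terminating").
        ts = pod["metadata"].get("deletionTimestamp")
        if ts and now - datetime.fromisoformat(ts.replace("Z", "+00:00")) > STUCK_AFTER:
            stuck.append(pod["metadata"]["name"])
    return stuck

if __name__ == "__main__":
    for name in stuck_terminating_pods():
        # A production monitor would page on-call here rather than print.
        print(f"ALERT: pod {name} stuck in Terminating for over {STUCK_AFTER}")
```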

Updates:

  • Time: March 11, 2021, 7:18 p.m.
    Status: Postmortem
    Update:
    **Dates:** Start Time: Friday, February 26, 2021, at 06:43 UTC. End Time: Friday, February 26, 2021, at 20:42 UTC.
    **What happened:** The insertion of newly submitted logs stopped entirely for all accounts for about 3 hours. Logs were still available in Live Tail but not for searching, graphing, and timelines. The ingestion of logs from clients was not interrupted and no data was lost. For more than 95% of newly submitted logs, log processing returned to normal speeds within 3 hours, and all logs submitted during the 3-hour pause were available again about 30 minutes later. For less than 5% of newly submitted logs, log processing returned to normal speeds gradually, and logs submitted during the pause also gradually became available; this impact was limited to about 12% of accounts. The incident was closed when logs from all time periods for all accounts were entirely available.
    **Why it happened:** Our service ran out of a set of resources that manage pre-sharding on the clusters that store logs, an operation that ensures new logs are promptly inserted into the clusters. This happened because of several simultaneous changes to our infrastructure that didn’t account for the need for more resources, particularly on clusters with a large number of shards relative to their overall storage capacity. The insertion of new logs slowed down and the backlog of unprocessed logs grew. Eventually, the portion of our service that processes new logs was unable to keep up with demand.
    **How we fixed it:** We restarted the portion of our service that processes newly submitted logs. During the recovery, we prioritized restoring logs submitted in the last day. 95% of accounts were fully recovered after 3.5 hours.
    **What we are doing to prevent it from happening again:** We’ve increased the scale of the set of resources that ensure logs are processed promptly by adding more servers for these resources to run on. We’ve also added alerting for when these resources are reaching their limit. (A sketch of that kind of capacity alert appears after this update list.)
  • Time: Feb. 26, 2021, 8:42 p.m.
    Status: Resolved
    Update: We resolved the issue and all services are operational.
  • Time: Feb. 26, 2021, 12:50 p.m.
    Status: Monitoring
    Update: We resolved the issue and the service has returned to normal. We are closely monitoring the environment at this time.
  • Time: Feb. 26, 2021, 9:36 a.m.
    Status: Identified
    Update: We are continuing to work on a fix for this issue.
  • Time: Feb. 26, 2021, 9:12 a.m.
    Status: Identified
    Update: We are continuing to work towards restoring search of recently ingested logs. At this time, users will experience searching, boards, and screens not returning results for recently ingested logs.
  • Time: Feb. 26, 2021, 7:08 a.m.
    Status: Identified
    Update: Customers may experience delays with newly ingested logs and searching. The issue has been identified and a fix is being implemented.
  • Time: Feb. 26, 2021, 6:43 a.m.
    Status: Investigating
    Update: We are currently investigating this issue.
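
The prevention step above, “alerting for when these resources are reaching their limit”, can be as simple as a utilization threshold that fires well before the pool is exhausted. A minimal sketch follows; the 80% threshold and the pool figures are assumptions for illustration, not Mezmo's internal values.

```python
#!/usr/bin/env python3
"""Alert before a fixed resource pool is exhausted.

The incident began when usage of a fixed-capacity resource pool (the
pre-sharding resources) silently hit its ceiling. Alerting at a
threshold below 100% utilization gives operators time to add capacity
before new work stalls.
"""

ALERT_THRESHOLD = 0.80  # assumed: notify operators at 80% utilization

def check_pool(used, capacity):
    """Return True (and emit an alert) if the pool is nearing its limit."""
    utilization = used / capacity
    if utilization >= ALERT_THRESHOLD:
        # A production monitor would page on-call here rather than print.
        print(f"ALERT: pool at {utilization:.0%} ({used}/{capacity})")
        return True
    return False

if __name__ == "__main__":
    # Simulated readings; real values would come from the cluster.
    for used in (600, 750, 810, 930):
        check_pool(used, capacity=1000)
```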

Check the status of similar companies and alternatives to Mezmo

Hudl: Systems Active
OutSystems: Systems Active
Postman: Systems Active
Mendix: Systems Active
DigitalOcean: Issues Detected
Bandwidth: Issues Detected
DataRobot: Systems Active
Grafana Cloud: Systems Active
SmartBear Software: Systems Active
Test IO: Systems Active
Copado Solutions: Systems Active
CircleCI: Systems Active

Frequently Asked Questions - Mezmo

Is there a Mezmo outage?
The current status of Mezmo is: Systems Active
Where can I find the official status page of Mezmo?
The official status page for Mezmo is here
How can I get notified if Mezmo is down or experiencing an outage?
To get notified of any status changes to Mezmo, simply sign up for OutLogger's free monitoring service. OutLogger checks the official status of Mezmo every few minutes and will notify you of any changes (a minimal polling sketch follows below). You can view the status of all your cloud vendors in one dashboard. Sign up here
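
For reference, here is a minimal sketch of that kind of polling. Many vendor status pages, Mezmo's among them, are hosted on Atlassian Statuspage, which serves a machine-readable summary endpoint; the exact URL below is an assumption and should be confirmed against the vendor's official page.

```python
#!/usr/bin/env python3
"""Poll a status page and report changes, Statuspage-style."""
import json
import time
import urllib.request

# Assumed endpoint: Statuspage-hosted sites expose /api/v2/status.json.
STATUS_URL = "https://status.mezmo.com/api/v2/status.json"

def fetch_status():
    """Return the human-readable status description, e.g. "All Systems Operational"."""
    with urllib.request.urlopen(STATUS_URL, timeout=10) as resp:
        return json.load(resp)["status"]["description"]

if __name__ == "__main__":
    last = None
    while True:
        current = fetch_status()
        if current != last:
            # A real monitor would email or push a notification here.
            print(f"Mezmo status changed: {current}")
            last = current
        time.sleep(300)  # poll every five minutes
```
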
What does Mezmo do?
Mezmo (formerly LogDNA) is a cloud-based log management and observability platform that helps teams ingest, analyze, and route log data from their applications and infrastructure.