Get notified about any outages, downtime, or incidents for miLibris and 1,800+ other cloud vendors. Monitor 10 companies for free.
Outage and incident data over the last 30 days for miLibris.
Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!
OutLogger tracks the status of these components for miLibris:
Component | Status
---|---
Publication | Active
Static | Active
API | Active
v3.x | Active
v4.x | Active
v5.x | Active
Console | Active
Multimedia Editor | Active
Push | Active
Content Delivery Network | Active
CA | Active
FR | Active
NL | Active
US | Active
Mindreader | Active
Push Notifications | Active
View the latest incidents for miLibris and check for official updates:
Description: This incident has been resolved.
Status: Resolved
Impact: Minor | Started At: June 30, 2016, 8:29 p.m.
Description: This incident has been resolved.
Status: Resolved
Impact: Major | Started At: June 23, 2016, 10:40 p.m.
Description: Due to sub-optimal data access requests.
Status: Resolved
Impact: None | Started At: June 16, 2016, 9:26 p.m.
Description: On Sunday and Monday morning we had downtime due to the poor response time of our main database. We want to share here, with full transparency, what happened.

### Problem

Our main database server was not able to execute the number of transactions per time frame required by the application servers. We observed high resource utilization (CPU, memory, ...) and excessive locking of database requests. At the time of writing, the root cause is not fully established and we are still investigating. A dedicated team is on site at 6:30 a.m. every day until complete resolution.

### Impacted services

- High API response times (may cause timeouts on connected devices or apps)
- Content delivery
- Delivery of non-cached thumbnails
- In some cases, misbehavior of the publication process (incorrect catalog cache invalidation when new content is ready)

### Resolution

We first wanted to reduce the number of DB queries by increasing the query cache of our APIs; to that end, we spawned two more instances of our frontend servers. We then also changed our load balancer configuration to queue and buffer more requests depending on the load of our frontends (see the sketch after this incident entry). These two actions solved the problem, and DB locking finally fell back to a normal level. Time to repair: 4-5 hours for each downtime.

### Still-running actions

Traffic grows steadily every day with the global increase in digital reading, but we saw no unexpected spike, so this problem is not due to a "special" load. We still have to work on this problem to understand exactly what happened. Plan of action:

- Reduce the coupling between some parts of our services and our main DB
- Increase caches or add more application-level caching
- Increase the number of DB probes to graph more values
- Work on our connection pooler to better spread connections and fall back when needed
Status: Postmortem
Impact: Minor | Started At: May 9, 2016, 6:58 a.m.
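The resolution above leans on a standard technique: caching query results close to the API so repeated reads never reach the database at all. As a minimal sketch of that idea, assuming a Python API layer (the names `cached_query` and `TTL_SECONDS` are illustrative, not miLibris code):

```python
import time

# Hypothetical sketch of the "increase the query cache of our APIs"
# mitigation: a small in-process TTL cache in front of the database.
TTL_SECONDS = 60
_cache: dict[str, tuple[float, object]] = {}

def cached_query(key: str, run_query):
    """Return a cached result if still fresh; otherwise run the real query."""
    now = time.time()
    hit = _cache.get(key)
    if hit is not None and now - hit[0] < TTL_SECONDS:
        return hit[1]          # fresh hit: skip the database entirely
    result = run_query()       # miss or stale: one real DB round trip
    _cache[key] = (now, result)
    return result

# Usage (illustrative): wrap an expensive catalog query so repeated API
# calls within the TTL window no longer contend for locks on the main DB.
# catalog = cached_query("catalog:v5", lambda: db.execute("SELECT ..."))
```

The connection pooler mentioned in the plan of action complements this: by capping concurrent DB connections, bursts of traffic queue at the pooler instead of piling up as locks inside the database.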
Description: This incident has been resolved.
Status: Resolved
Impact: Major | Started At: May 8, 2016, 7:41 a.m.