Outage and incident data over the last 30 days for Callr.
View the latest incidents for Callr and check for official updates:
Description: A fix has been implemented and the DID store is now available.
Status: Resolved
Impact: Minor | Started At: Jan. 29, 2020, 9:42 a.m.
Description: This incident has been resolved. Routing to all carriers is working flawlessly. We apologize for any inconvenience.
Status: Resolved
Impact: Minor | Started At: Jan. 15, 2020, 9:41 a.m.
Description: A new spare device is ready to use in case of another hardware failure. We are now marking this incident as resolved.
Status: Resolved
Impact: Critical | Started At: Dec. 18, 2019, 10:53 a.m.
Description: The EQUINIX PA2 datacenter had a power incident while performing maintenance on its UPS systems. They stated, "This activity is fully scripted and designed to be transparent to your operations." This was obviously not the case: an issue with one of the UPS systems caused a power surge, which impacted some cages. The power incident affected most of our equipment in this datacenter. However, because most of the platform is redundant and highly available, the impact was minimal.
Status: Resolved
Impact: Minor | Started At: Nov. 20, 2019, 9:58 p.m.
Description: We had a VMware crash (at the hypervisor level) on our main Redis server. The log reads: An application (/bin/vmx) running on ESXi host has crashed (1 time(s) so far). A core file might have been created at /vmfs/volumes/579f2eb5-e0d5763e-df1c-f48e38-c36596/pa2-rediscore01/vmx-zdump.000. The VM was powered off automatically by VMware after the crash, and our monitoring system detected the issue immediately. In an event like this, we can either deploy a new configuration to use a backup Redis instance (we have four), or wait for the main instance to come back online. Because the origin of the outage was identified very quickly, we restarted the VM and chose not to switch to a backup instance; the delay would have been roughly the same either way. Once the VM was running again, the outage was over. We are sorry about this issue. We will analyze the logs to prevent similar problems in the future. (A minimal failover sketch follows this entry.)
Status: Resolved
Impact: Critical | Started At: Oct. 18, 2019, 1:20 p.m.
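The failover choice described in this incident (redeploy configuration to point at one of the backup Redis instances, or wait for the primary to come back) can be pictured with a small health-check sketch. This is purely illustrative: it assumes the redis-py client and hypothetical host names, and is not Callr's actual failover tooling.

# Illustrative only: use the primary Redis instance if it answers a PING,
# otherwise fall back to the first reachable backup. Host names are hypothetical.
import redis

PRIMARY = {"host": "redis-primary.example.internal", "port": 6379}
BACKUPS = [{"host": f"redis-backup0{i}.example.internal", "port": 6379} for i in range(1, 5)]

def pick_reachable_instance():
    """Return a client for the primary if it responds, else the first live backup."""
    for params in [PRIMARY] + BACKUPS:
        client = redis.Redis(**params, socket_connect_timeout=2)
        try:
            client.ping()  # raises redis.exceptions.ConnectionError if unreachable
            return client
        except redis.exceptions.ConnectionError:
            continue
    raise RuntimeError("no Redis instance reachable")

In the incident above, the team took the second option (waiting for the primary to restart) because it was expected to take about as long as redeploying configuration to a backup.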