Get notified about any outages, downtime, or incidents for Molecule and 1,800+ other cloud vendors. Monitor 10 companies for free.
Outage and incident data over the last 30 days for Molecule.
Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!
OutLogger tracks the status of these components for Molecule:
| Component | Status |
| --- | --- |
| **Europe** | Active |
| Amazon Web Services - Databases | Active |
| Amazon Web Services - DNS | Active |
| Amazon Web Services - EC2 | Active |
| Amazon Web Services - EKS | Active |
| Amazon Web Services - File Storage | Active |
| Auth0 User Authentication | Active |
| Molecule Europe | Active |
| **US** | Active |
| Amazon Web Services - Databases | Active |
| Amazon Web Services - DNS | Active |
| Amazon Web Services - EC2 | Active |
| Amazon Web Services - EKS | Active |
| Auth0 User Authentication | Active |
| Molecule | Active |
View the latest incidents for Molecule and check for official updates:
Description: We wanted to share our research and results related to this incident in late October. Thank you for your patience at that time; here is what we found.

#### The First Event
On the afternoon of Tuesday, October 22, Amazon Web Services (AWS) began experiencing a DDoS (distributed denial of service) attack. To be clear, this was not an attempted intrusion; it was an attempt to slow a provider down by inundating it with traffic. Amazon has confirmed this in emails to customers, but has not posted publicly about the incident. AWS, our datacenter provider, largely stayed up during the attack, but experienced intermittent seconds of downtime.

#### Knock-On Effects
Those seconds of downtime caused pieces of our infrastructure (our Kubernetes masters, specifically) to reboot. This is routine and should not cause major issues. However, due to a misconfiguration, these servers held conflicting information about how many servers each was managing. The issue had not surfaced before because the servers had not recently needed a reboot. As a result, two separate masters began spinning additional servers up and down, causing occasional issues during the late evening. Increasing our minimum server count fixed the issue temporarily, and the root cause was found and fixed that weekend.

#### Lessons Learned
We learned that we should fully reboot our servers as part of routine maintenance to catch issues like this, and we plan to do so going forward. We apologize for the intermittent warnings and slowness during the week of October 22, and we plan to avoid this type of incident in the future.
Status: Postmortem
Impact: None | Started At: Oct. 23, 2019, 3:17 a.m.
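The postmortem's temporary fix was to raise the cluster's minimum server count so the conflicting masters could no longer scale the fleet below a safe floor. As an illustration only (the postmortem does not say how Molecule applied the change), a minimal sketch of raising the minimum size of an AWS Auto Scaling group with boto3 might look like this; the group name, region, and sizes are hypothetical:

```python
# Sketch: raise the floor on a worker-node Auto Scaling group so transient
# controller disagreement cannot scale the fleet below a safe minimum.
# All names and numbers below are assumptions for illustration.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")  # hypothetical region

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="k8s-worker-nodes",  # hypothetical group name
    MinSize=4,          # new floor: nodes cannot be scaled below this count
    DesiredCapacity=4,  # bring the group up to the new floor immediately
)
```

A change like this only masks the symptom; the lasting fix described above was correcting the masters' conflicting configuration and rebooting servers during routine maintenance.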
Description: Amazon registered a DNS fault, and the Molecule application was unavailable for approximately 7 minutes.
Status: Resolved
Impact: None | Started At: Oct. 22, 2019, 9:36 p.m.
Description: As a result of an ICE outage, there is a possibility that trades were not captured in Molecule for a brief period of time this morning. We have reset all connections with ICE and have requested trade history. All deals should currently be captured in Molecule. Please contact the customer success team at [email protected] if you find any missing trades.
Status: Resolved
Impact: None | Started At: Oct. 14, 2019, 6:05 p.m.
Join OutLogger to be notified when any of your vendors or the components you use experience an outage or downtime. Join for free - no credit card required.