Last checked: 8 minutes ago
Get notified about any outages, downtime, or incidents for imgix and 1800+ other cloud vendors. Monitor 10 companies for free.
Outage and incident data over the last 30 days for imgix.
Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!
Sign Up Now
OutLogger tracks the status of these components for imgix:
Component | Status |
---|---|
API Service | Active |
Docs | Active |
Purging | Active |
Rendering Infrastructure | Active |
Sandbox | Active |
Stripe API | Active |
Web Administration Tools | Active |
Content Delivery Network | Active |
Amsterdam (AMS) | Active |
Ashburn (BWI) | Active |
Ashburn (DCA) | Active |
Ashburn (IAD) | Active |
Atlanta (ATL) | Active |
Atlanta (FTY) | Active |
Atlanta (PDK) | Active |
Auckland (AKL) | Active |
Boston (BOS) | Active |
Brisbane (BNE) | Active |
Buenos Aires (EZE) | Active |
Cape Town (CPT) | Active |
Chennai (MAA) | Active |
Chicago (CHI) | Active |
Chicago (MDW) | Active |
Chicago (ORD) | Active |
Columbus (CMH) | Active |
Content Delivery Network | Active |
Copenhagen (CPH) | Active |
Curitiba (CWB) | Active |
Dallas (DAL) | Active |
Dallas (DFW) | Active |
Denver (DEN) | Active |
Dubai (FJR) | Active |
Frankfurt (FRA) | Active |
Frankfurt (HHN) | Active |
Helsinki (HEL) | Active |
Hong Kong (HKG) | Active |
Houston (IAH) | Active |
Johannesburg (JNB) | Active |
London (LCY) | Active |
London (LHR) | Active |
Los Angeles (BUR) | Active |
Los Angeles (LAX) | Active |
Madrid (MAD) | Active |
Melbourne (MEL) | Active |
Miami (MIA) | Active |
Milan (MXP) | Active |
Minneapolis (MSP) | Active |
Montreal (YUL) | Active |
Mumbai (BOM) | Active |
Newark (EWR) | Active |
New York (JFK) | Active |
New York (LGA) | Active |
Osaka (ITM) | Active |
Palo Alto (PAO) | Active |
Paris (CDG) | Active |
Perth (PER) | Active |
Rio de Janeiro (GIG) | Active |
San Jose (SJC) | Active |
Santiago (SCL) | Active |
São Paulo (GRU) | Active |
Seattle (SEA) | Active |
Singapore (SIN) | Active |
Stockholm (BMA) | Active |
Sydney (SYD) | Active |
Tokyo (HND) | Active |
Tokyo (NRT) | Active |
Tokyo (TYO) | Active |
Toronto (YYZ) | Active |
Vancouver (YVR) | Active |
Wellington (WLG) | Active |
DNS | Active |
imgix DNS Network | Active |
NS1 Global DNS Network | Active |
Docs | Active |
Netlify Content Distribution Network | Active |
Netlify Origin Servers | Active |
Storage Backends | Active |
Google Cloud Storage | Active |
s3-ap-northeast-1 | Active |
s3-ap-northeast-2 | Active |
s3-ap-southeast-1 | Active |
s3-ap-southeast-2 | Active |
s3-ca-central-1 | Active |
s3-eu-central-1 | Active |
s3-eu-west-1 | Active |
s3-eu-west-2 | Active |
s3-eu-west-3 | Active |
s3-sa-east-1 | Active |
s3-us-east-2 | Active |
s3-us-standard | Active |
s3-us-west-1 | Active |
s3-us-west-2 | Active |
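These component states can also be checked programmatically rather than through the dashboard. The sketch below polls a Statuspage-style JSON feed and flags anything that is not operational; the `status.imgix.com` URL and the `/api/v2/components.json` endpoint are assumptions based on the common Atlassian Statuspage layout, so verify both the URL and the payload shape before relying on it.

```python
# Sketch: poll a Statuspage-style feed for imgix component health.
# Assumption: status.imgix.com exposes the standard public endpoint
# /api/v2/components.json -- confirm this before depending on it.
import json
import urllib.request

STATUS_URL = "https://status.imgix.com/api/v2/components.json"  # assumed endpoint

def fetch_component_status(url: str = STATUS_URL) -> dict:
    """Return a mapping of component name -> status (e.g. 'operational')."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.load(resp)
    return {c["name"]: c["status"] for c in payload.get("components", [])}

if __name__ == "__main__":
    for name, status in sorted(fetch_component_status().items()):
        if status != "operational":
            print(f"ALERT: {name} is {status}")
```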
View the latest incidents for imgix and check for official updates:
Description: This incident has been resolved.
Status: Resolved
Impact: Minor | Started At: May 18, 2023, 3:52 p.m.
Description:
# What happened
On May 1st, 2023, between 08:23 UTC and 15:08 UTC, imgix experienced intermittent errors affecting a small percentage of non-cached renders.

# How were customers impacted?
During the affected period, a small percentage of requests to the Rendering API for non-cached renders returned a `502` or `503` error. Errors increased gradually, with less than 0.5% of requests returning an error at the height of the incident.

# What went wrong during the incident?
Our upstream provider experienced communication issues between CDN POPs, causing intermittent `502`/`503` responses for a small percentage of requests to our Rendering API. The increase in errors was small enough that it did not meet our monitoring thresholds for triggering alerts. One of our engineers observed a slow increase in errors and alerted other team members to a potential issue with our service. After tracing the issue to our upstream provider, we pushed a patch to mitigate the intermittent connectivity issues, resolving the incident.

# What will imgix do to prevent this in the future?
We have refined our alerting to better catch slowly increasing error rates. We have also confirmed that the root cause of this incident has been fixed by our upstream provider, and we are updating our traffic routing in case the upstream issue occurs again.
Status: Postmortem
Impact: Minor | Started At: May 1, 2023, 2:20 p.m.
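The intermittent `502`/`503` responses described in this postmortem were transient, so a client-side mitigation is to retry failed render requests with exponential backoff. This is a minimal sketch, not imgix guidance; the example URL and the retry parameters are illustrative placeholders.

```python
# Sketch: retry transient 502/503 responses from a render URL with
# exponential backoff. The domain and parameters below are placeholders.
import time
import urllib.error
import urllib.request

def fetch_with_retry(url: str, attempts: int = 4, base_delay: float = 0.5) -> bytes:
    """GET `url`, retrying on 502/503 with exponential backoff."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code not in (502, 503) or attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    raise RuntimeError("unreachable")

image_bytes = fetch_with_retry("https://example.imgix.net/photo.jpg?w=800")
```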
Description:
# What happened?
On April 13, 2023, between 17:09 UTC and 17:32 UTC, imgix experienced a partial outage affecting non-cached renders. During this time, requests for cached assets continued to serve a `200` response, while requests for non-cached assets returned a server error. A fix was implemented at 17:32 UTC, restoring service.

# How were customers impacted?
Between 17:09 UTC and 17:32 UTC, requests to the Rendering API for non-cached renders returned a server error, with 9% of all requests to the Rendering API returning an error at the height of the incident.

# What went wrong during the incident?
We identified an error in one of our connections to customer origins. This error led to a significant slowdown in retrieving new assets from customer origins. The errors grew rapidly in a short amount of time, causing our Rendering API to return 5xx errors. To restore the service, our engineers redirected some of our network traffic. The service was fully restored by 17:32 UTC, but some errors persisted and were served from the cache until they were completely cleared at 17:35 UTC.

# What will imgix do to prevent this in the future?
We have taken the following steps to prevent this issue from recurring:

* Fixed the misconfigured alert so our monitoring and alerts will trigger and identify potential issues before they become critical.
* Removed the connection from our routing, replacing it with a new connection that will not experience the same errors.

We are in the process of implementing the following:

* Conducting a review of our current tooling to increase our traffic and network configuration capabilities.
* Reviewing our current configuration to limit the affected services should a similar incident happen in the future.
Status: Postmortem
Impact: Major | Started At: April 13, 2023, 5:20 p.m.
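Because cached assets kept returning `200` while fresh renders failed, it can help during an incident like this to check whether a given response was served from the cache. The sketch below inspects response headers; the `X-Cache` and `X-Served-By` headers are assumptions (many Fastly-backed CDNs emit them) and the URL is a placeholder, so confirm these headers actually appear on your own imgix responses.

```python
# Sketch: inspect response headers to tell cached responses apart from fresh
# renders while diagnosing an incident. X-Cache / X-Served-By are assumed
# headers -- verify they are present for your source before relying on them.
import urllib.request

def describe_response(url: str) -> None:
    req = urllib.request.Request(url, method="GET")
    with urllib.request.urlopen(req, timeout=10) as resp:
        cache = resp.headers.get("X-Cache", "unknown")          # e.g. "HIT" or "MISS"
        served_by = resp.headers.get("X-Served-By", "unknown")  # CDN POP, if exposed
        print(f"{resp.status} cache={cache} served_by={served_by}")

describe_response("https://example.imgix.net/photo.jpg?w=400")  # placeholder URL
```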
Join OutLogger to be notified when any of your vendors or the components you use experience an outage or downtime. Join for free - no credit card required.