Get notified about any outages, downtime or incidents for imgix and 1800+ other cloud vendors. Monitor 10 companies, for free.
Outage and incident data over the last 30 days for imgix.
Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!
OutLogger tracks the status of these components for imgix:
Component | Status |
---|---|
API Service | Active |
Docs | Active |
Purging | Active |
Rendering Infrastructure | Active |
Sandbox | Active |
Stripe API | Active |
Web Administration Tools | Active |
Content Delivery Network | Active |
Amsterdam (AMS) | Active |
Ashburn (BWI) | Active |
Ashburn (DCA) | Active |
Ashburn (IAD) | Active |
Atlanta (ATL) | Active |
Atlanta (FTY) | Active |
Atlanta (PDK) | Active |
Auckland (AKL) | Active |
Boston (BOS) | Active |
Brisbane (BNE) | Active |
Buenos Aires (EZE) | Active |
Cape Town (CPT) | Active |
Chennai (MAA) | Active |
Chicago (CHI) | Active |
Chicago (MDW) | Active |
Chicago (ORD) | Active |
Columbus (CMH) | Active |
Content Delivery Network | Active |
Copenhagen (CPH) | Active |
Curitiba (CWB) | Active |
Dallas (DAL) | Active |
Dallas (DFW) | Active |
Denver (DEN) | Active |
Dubai (FJR) | Active |
Frankfurt (FRA) | Active |
Frankfurt (HHN) | Active |
Helsinki (HEL) | Active |
Hong Kong (HKG) | Active |
Houston (IAH) | Active |
Johannesburg (JNB) | Active |
London (LCY) | Active |
London (LHR) | Active |
Los Angeles (BUR) | Active |
Los Angeles (LAX) | Active |
Madrid (MAD) | Active |
Melbourne (MEL) | Active |
Miami (MIA) | Active |
Milan (MXP) | Active |
Minneapolis (MSP) | Active |
Montreal (YUL) | Active |
Mumbai (BOM) | Active |
Newark (EWR) | Active |
New York (JFK) | Active |
New York (LGA) | Active |
Osaka (ITM) | Active |
Palo Alto (PAO) | Active |
Paris (CDG) | Active |
Perth (PER) | Active |
Rio de Janeiro (GIG) | Active |
San Jose (SJC) | Active |
Santiago (SCL) | Active |
São Paulo (GRU) | Active |
Seattle (SEA) | Active |
Singapore (SIN) | Active |
Stockholm (BMA) | Active |
Sydney (SYD) | Active |
Tokyo (HND) | Active |
Tokyo (NRT) | Active |
Tokyo (TYO) | Active |
Toronto (YYZ) | Active |
Vancouver (YVR) | Active |
Wellington (WLG) | Active |
DNS | Active |
imgix DNS Network | Active |
NS1 Global DNS Network | Active |
Docs | Active |
Netlify Content Distribution Network | Active |
Netlify Origin Servers | Active |
Storage Backends | Active |
Google Cloud Storage | Active |
s3-ap-northeast-1 | Active |
s3-ap-northeast-2 | Active |
s3-ap-southeast-1 | Active |
s3-ap-southeast-2 | Active |
s3-ca-central-1 | Active |
s3-eu-central-1 | Active |
s3-eu-west-1 | Active |
s3-eu-west-2 | Active |
s3-eu-west-3 | Active |
s3-sa-east-1 | Active |
s3-us-east-2 | Active |
s3-us-standard | Active |
s3-us-west-1 | Active |
s3-us-west-2 | Active |
View the latest incidents for imgix and check for official updates:
Description:
# What happened?
On August 10, 2021, at 19:05 UTC, our CDN provider experienced a brief outage, which resulted in elevated rendering error rates from the imgix service. Error rates returned to almost normal levels by 19:30 UTC, with a small percentage of errors continuing to occur in imgix. By 19:58 UTC, error rates were restored to completely normal levels, though non-user-affecting errors continued to appear in our stack. Our team continued to apply mitigations and fixes, and the incident was marked as fully resolved on August 11 at 2:15 UTC.
# How were customers impacted?
Between 19:05 UTC and 19:30 UTC, users experienced elevated render error rates for non-cached images requested from the imgix service. At the height of the incident (19:12 UTC), 11% of requests to imgix received a `503` response. After 19:12 UTC, errors dropped sharply to 5% and continued to fall, returning to almost normal levels by 19:30 UTC. By this time, only a small percentage of requests (<1%) continued to return errors. Ongoing work fully restored the rendering service by 19:58 UTC. From then until the incident was resolved on August 11 at 2:15 UTC, backend errors continued to occur, though they did not affect image deliverability.
# What went wrong during the incident?
At 19:05 UTC, our CDN provider posted a status update concerning performance impact to their CDN services, which subsequently affected imgix services by elevating error rates. Our monitoring tools alerted our engineering team to the rising error rates, which allowed us to apply quick mitigations to control their growth. Our own status page was updated at 19:16 UTC. Thanks to the mitigations applied by both our CDN provider and our engineering team, the service began to recover at 19:30 UTC, with only a small percentage of errors persisting. Our team continued to apply changes to sustain the mitigations, and errors were restored to normal levels by 19:58 UTC. Though rendering had been restored, errors not visible to end users continued to surface within our infrastructure. Our team continued to investigate and apply fixes, though erratic behavior continued for much longer than anticipated as a result of the initial outage. The incident was marked as resolved on August 11 at 2:15 UTC.
# What will imgix do to prevent this in the future?
This incident exposed an issue with brief CDN service outages causing lengthy incident times for our rendering service. We will tune our infrastructure and investigate further to explore opportunities for mitigating the after-effects of CDN outages on imgix services.
Status: Postmortem
Impact: Major | Started At: Aug. 10, 2021, 7:16 p.m.
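The postmortem notes that, at the incident's peak, roughly 11% of requests to imgix returned a `503` response before error rates fell back to normal. A common client-side mitigation for transient CDN errors of this kind is to retry with exponential backoff; the sketch below is illustrative only, and the example domain, path, and retry parameters are assumptions rather than imgix recommendations.

```python
import time
import requests

def fetch_with_retry(url, max_attempts=4, base_delay=0.5):
    """Fetch an image, retrying on transient 5xx responses (such as the 503s
    described in this incident). The delay doubles after each failed attempt."""
    for attempt in range(max_attempts):
        resp = requests.get(url, timeout=10)
        if resp.status_code < 500:
            return resp  # success, or a non-retryable client error
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))
    return resp  # exhausted retries; surface the last response to the caller

# Hypothetical imgix URL for illustration; substitute your own source domain.
image = fetch_with_retry("https://example.imgix.net/photo.jpg?w=400")
print(image.status_code)
```

A retry loop like this only smooths over brief error spikes; sustained outages should still be surfaced to monitoring rather than retried indefinitely.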
Description: Service has been completely restored.
Status: Resolved
Impact: Minor | Started At: June 24, 2021, 11:02 p.m.
Description:
# What happened?
On June 10, 2021, between 1:50 UTC and 2:15 UTC, the rendering API experienced significant rendering errors for uncached derivative images. The issue was identified and fixed, though a small percentage (<0.01%) of renders continued to return errors until another fix was pushed out at 2:54 UTC. The incident was marked as fully resolved at 4:10 UTC.
# How were customers impacted?
On June 10, between 1:50 UTC and 2:15 UTC, a significant number of requests for uncached derivative images returned 503 errors. At its peak, 6% of all requests to imgix returned an error. A fix began rolling out at 2:10 UTC and was fully deployed by 2:15 UTC, after which errors returned to almost normal rates (<0.01%). A later patch restored the service entirely to normal at 2:54 UTC.
# What went wrong during the incident?
Our engineers were alerted to an increasing number of error responses from an internal service. Investigating the issue, our engineers identified that a misconfiguration during routine network maintenance had caused a DNS-related failure within our infrastructure. During our investigation, we found that our failover systems had not mitigated the issue as expected. Our engineers immediately corrected the misconfiguration and restored DNS, which restored the majority of service. After service was restored, our engineers detected rendering instability affecting a very small percentage of images. Our engineering team continued to investigate and was able to push out a fix by 2:54 UTC.
# What will imgix do to prevent this in the future?
We will revisit current workflows and standard operating procedures to perform an architectural review of system dependencies. In addition, imgix plans to improve coordination regarding scheduled maintenance to avoid service disruptions related to network changes.
Status: Postmortem
Impact: Critical | Started At: June 10, 2021, 1:58 a.m.
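This postmortem attributes the outage to a DNS misconfiguration that the failover systems did not catch. As a rough illustration of the kind of resolution check such a failover relies on, the sketch below tries a primary hostname and falls back to an alternate when resolution fails; the hostnames are hypothetical placeholders and this is not imgix's actual tooling.

```python
import socket

def resolve_or_failover(primary, fallback):
    """Return (hostname, address) for whichever hostname resolves first.
    A DNS misconfiguration on the primary surfaces as socket.gaierror,
    which triggers the fallback path instead of a hard failure."""
    for host in (primary, fallback):
        try:
            return host, socket.gethostbyname(host)
        except socket.gaierror:
            continue
    raise RuntimeError("neither hostname resolved")

# Hypothetical hostnames for illustration only.
host, addr = resolve_or_failover("example.imgix.net", "example-backup.imgix.net")
print(f"serving from {host} ({addr})")
```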
Join OutLogger to be notified when any of your vendors or the components you use experience an outage or downtime. Join for free - no credit card required.