Get notified about any outages, downtime, or incidents for imgix and 1800+ other cloud vendors. Monitor 10 companies for free.
Outage and incident data over the last 30 days for imgix.
Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!
Outlogger tracks the status of these components for imgix:
Component | Status |
---|---|
API Service | Active |
Docs | Active |
Purging | Active |
Rendering Infrastructure | Active |
Sandbox | Active |
Stripe API | Active |
Web Administration Tools | Active |
Content Delivery Network | Active |
Amsterdam (AMS) | Active |
Ashburn (BWI) | Active |
Ashburn (DCA) | Active |
Ashburn (IAD) | Active |
Atlanta (ATL) | Active |
Atlanta (FTY) | Active |
Atlanta (PDK) | Active |
Auckland (AKL) | Active |
Boston (BOS) | Active |
Brisbane (BNE) | Active |
Buenos Aires (EZE) | Active |
Cape Town (CPT) | Active |
Chennai (MAA) | Active |
Chicago (CHI) | Active |
Chicago (MDW) | Active |
Chicago (ORD) | Active |
Columbus (CMH) | Active |
Copenhagen (CPH) | Active |
Curitiba (CWB) | Active |
Dallas (DAL) | Active |
Dallas (DFW) | Active |
Denver (DEN) | Active |
Dubai (FJR) | Active |
Frankfurt (FRA) | Active |
Frankfurt (HHN) | Active |
Helsinki (HEL) | Active |
Hong Kong (HKG) | Active |
Houston (IAH) | Active |
Johannesburg (JNB) | Active |
London (LCY) | Active |
London (LHR) | Active |
Los Angeles (BUR) | Active |
Los Angeles (LAX) | Active |
Madrid (MAD) | Active |
Melbourne (MEL) | Active |
Miami (MIA) | Active |
Milan (MXP) | Active |
Minneapolis (MSP) | Active |
Montreal (YUL) | Active |
Mumbai (BOM) | Active |
Newark (EWR) | Active |
New York (JFK) | Active |
New York (LGA) | Active |
Osaka (ITM) | Active |
Palo Alto (PAO) | Active |
Paris (CDG) | Active |
Perth (PER) | Active |
Rio de Janeiro (GIG) | Active |
San Jose (SJC) | Active |
Santiago (SCL) | Active |
São Paulo (GRU) | Active |
Seattle (SEA) | Active |
Singapore (SIN) | Active |
Stockholm (BMA) | Active |
Sydney (SYD) | Active |
Tokyo (HND) | Active |
Tokyo (NRT) | Active |
Tokyo (TYO) | Active |
Toronto (YYZ) | Active |
Vancouver (YVR) | Active |
Wellington (WLG) | Active |
DNS | Active |
imgix DNS Network | Active |
NS1 Global DNS Network | Active |
Docs | Active |
Netlify Content Distribution Network | Active |
Netlify Origin Servers | Active |
Storage Backends | Active |
Google Cloud Storage | Active |
s3-ap-northeast-1 | Active |
s3-ap-northeast-2 | Active |
s3-ap-southeast-1 | Active |
s3-ap-southeast-2 | Active |
s3-ca-central-1 | Active |
s3-eu-central-1 | Active |
s3-eu-west-1 | Active |
s3-eu-west-2 | Active |
s3-eu-west-3 | Active |
s3-sa-east-1 | Active |
s3-us-east-2 | Active |
s3-us-standard | Active |
s3-us-west-1 | Active |
s3-us-west-2 | Active |
View the latest incidents for imgix and check for official updates:
Description:

# Postmortem

# **What happened?**

On Feb 20, 2024, at 19:00 UTC, uploads using the `/api/v1/sources/upload/` API endpoint began to experience slowness and some timeouts. The issue was completely resolved by Feb 21, 2024, 20:00 UTC.

# **How were customers impacted?**

Between Feb 20, 2024, 19:00 UTC, and Feb 21, 2024, 20:00 UTC, some requests to our upload API experienced slow responses, and a subset of requests resulted in timeout errors. Uploads using the `/api/v1/sources/<source_id>/upload-sessions/` endpoints were unaffected, as were uploads made through the Asset Manager UI.

# **What went wrong during the incident?**

Two compounding issues caused the slowdown. First, a service update introduced an issue with our `/api/v1/sources/upload/` API endpoint. Second, a simultaneous and separate slowdown affected the cloud servers responsible for executing the upload actions. The service update issue increased the load on our upload function, which was already under strain from the cloud server slowdown. Together, these factors caused the slow and timed-out responses.

# **What will imgix do to prevent this in the future?**

We have streamlined the upload process so that this cannot happen again.
Status: Postmortem
Impact: None | Started At: Feb. 20, 2024, 5 p.m.
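For client-side resilience against the kind of transient slowness and timeouts this postmortem describes, a retry with exponential backoff is a common mitigation. The sketch below is illustrative and not part of the imgix API: `call_with_retry` and its defaults are assumptions, and `fn` would wrap whatever HTTP call actually posts to the upload endpoint.

```python
import time


def call_with_retry(fn, retries=3, base_delay=1.0):
    """Call fn(), retrying on TimeoutError with exponential backoff.

    Hypothetical helper: fn would wrap the actual HTTP POST to the
    upload endpoint; names and defaults are assumptions, not imgix API.
    """
    for attempt in range(retries + 1):
        try:
            return fn()
        except TimeoutError:
            if attempt == retries:
                raise  # out of retries; surface the timeout to the caller
            # back off 1s, 2s, 4s, ... before the next attempt
            time.sleep(base_delay * (2 ** attempt))
```

A caller would pass a zero-argument callable (e.g. a `functools.partial` around the upload request); idempotency of the wrapped call matters, since a timed-out request may have partially succeeded server-side.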
Description:

# Postmortem

# **What happened?**

On January 22, 2024, at 07:38 UTC, some Web Folder requests to the imgix service began to return a `403` response. By 11:56 UTC, the issue had been completely resolved.

# **How were customers impacted?**

Between 07:38 UTC and 11:36 UTC, some requests to Web Folder Sources returned a `403` error. This affected a small number of assets (<0.1%).

# **What went wrong during the incident?**

A service update caused an issue with Web Folders whose Origin URLs contained directories with double slashes (`//`). The update introduced an encoding error, leading to `403` responses for Origins matching the double-slash pattern. A fix was pushed to handle this URL pattern, allowing us to fetch images from the affected Origins.

# **What will imgix do to prevent this in the future?**

We will update our tests to catch encoding issues like this in the future.
Status: Postmortem
Impact: None | Started At: Jan. 22, 2024, 9 p.m.
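The double-slash pattern in the postmortem above is easy to reproduce: `//` in a URL path produces an empty path segment, which naive encoding logic can mishandle. The helper below is a hypothetical sketch (not imgix's actual fix) showing one way to normalize such paths with the standard library before encoding.

```python
from urllib.parse import urlsplit, quote


def normalize_origin_path(url: str) -> str:
    """Collapse duplicate slashes in a URL path, then percent-encode
    each segment.

    Hypothetical helper illustrating the double-slash (//) pattern from
    the postmortem; not imgix's actual remediation.
    """
    parts = urlsplit(url)
    # Splitting '/images//photo.jpg' on '/' yields an empty segment for
    # the '//'; dropping empties collapses the duplicate slashes.
    segments = [s for s in parts.path.split("/") if s]
    path = "/" + "/".join(quote(s) for s in segments)
    return f"{parts.scheme}://{parts.netloc}{path}"
```

For example, `normalize_origin_path("https://origin.example.com/images//photo 1.jpg")` collapses the doubled slash and encodes the space in one pass. Note that some origins treat `//` as significant, so whether collapsing is safe depends on the backend.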
Description: This incident has been resolved.
Status: Resolved
Impact: Minor | Started At: Nov. 11, 2023, 8:06 a.m.
Description:

# What happened?

On October 23, 2023, between 21:43 UTC and 23:14 UTC, imgix experienced a partial outage affecting images served from the Rendering API. During this time, a small percentage (<0.45% on average) of non-cached requests returned a server error. A fix was implemented at 23:02 UTC, allowing the service to fully recover by 23:14 UTC.

# How were customers impacted?

Between 21:46 UTC and 23:14 UTC, requests to the Rendering API returned server errors, with 0.65% of all requests to our CDN returning an error at the height of the incident. Additionally, Sources returned an unknown status between 21:06 UTC and 21:09 UTC. During this period, customers reported being unable to create Sources.

# What went wrong during the incident?

Our Rendering API experienced an unexpected interaction that caused a dramatic increase in server load. Error rates rose as the network slowly became overloaded, fluctuating between 0.07% and 0.65% until we resolved the issue. To restore the service, our engineers reconfigured our network traffic to handle the unexpected rendering behavior. During the incident, a separate issue (unrelated to rendering) impacted our Source data, which delayed the investigation into the cause of the rendering errors.

# What will imgix do to prevent this in the future?

We have taken the following steps to prevent this issue from recurring:

* Fixed the misconfigured server interaction
* We will put an alert system in place to notify us when traffic congestion happens from a misconfigured source interaction.

We are in the process of implementing the following:

* Conducting a review of our current tooling to increase our traffic and network configuration capabilities.
* Reviewing our current configuration to limit the affected services should a similar incident happen.
Status: Postmortem
Impact: Minor | Started At: Oct. 23, 2023, 10:34 p.m.
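The alerting step mentioned in that postmortem (notifying on traffic congestion) is typically built on a sliding-window error-rate check. The class below is a minimal sketch of that idea; the window size, threshold, and class name are illustrative assumptions, not imgix's actual monitoring.

```python
from collections import deque


class ErrorRateAlert:
    """Sliding-window error-rate monitor.

    A minimal sketch of the kind of alerting the postmortem describes;
    the window size and threshold here are illustrative, not imgix's
    production values.
    """

    def __init__(self, window=1000, threshold=0.005):
        # deque(maxlen=...) drops the oldest sample once full,
        # giving a rolling window over the last `window` requests.
        self.window = deque(maxlen=window)
        self.threshold = threshold  # e.g. 0.005 = 0.5% of requests erroring

    def record(self, is_error: bool) -> bool:
        """Record one request outcome; return True if the alert should fire."""
        self.window.append(is_error)
        rate = sum(self.window) / len(self.window)
        return rate > self.threshold
```

In practice the fluctuation range from the incident (0.07% to 0.65%) suggests the threshold would sit between the baseline error rate and the incident's floor, so the alert fires early in the ramp-up rather than at the peak.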
Description: This incident has been resolved.
Status: Resolved
Impact: Minor | Started At: Aug. 7, 2023, 2:19 a.m.