Last checked: 8 minutes ago
Get notified about any outages, downtime or incidents for getstream.io and 1800+ other cloud vendors. Monitor 10 companies, for free.
Outage and incident data over the last 30 days for getstream.io.
Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!
OutLogger tracks the status of these components for getstream.io:
Region / Group | Component | Status |
---|---|---|
Brazil (São Paulo) | Chat - Edge | Active |
Canada | Chat - Edge | Active |
Dublin | Chat - API | Active |
Dublin | Chat - Edge | Active |
Dublin | Feed - API | Active |
Frankfurt | Chat - Edge | Active |
Global services | CDN | Active |
Global services | Dashboard | Active |
Global services | Edge | Active |
Mumbai | Chat - API | Active |
Mumbai | Chat - Edge | Active |
Mumbai | Feed - API | Active |
Ohio | Chat - API | Active |
Ohio | Chat - Edge | Active |
Singapore | Chat - API | Active |
Singapore | Chat - Edge | Active |
Singapore | Feed - API | Active |
South Africa (Cape Town) | Chat - Edge | Active |
Sydney | Chat - API | Active |
Sydney | Chat - Edge | Active |
Tokyo | Chat - Edge | Active |
Tokyo | Feed - API | Active |
US-East | Chat - API | Active |
US-East | Chat - Edge | Active |
US-East | Feed - API | Active |
US-East | Feed - Personalization | Active |
US-East | Feed - Realtime notifications | Active |
View the latest incidents for getstream.io and check for official updates:
Description: At 2:41 PM UTC we experienced approximately 60 seconds of downtime with the Stream API. This was caused by a database lock not being released in a timely manner. Regular service was restored immediately after the lock was released.
Status: Resolved
Impact: None | Started At: Feb. 16, 2017, 3:29 p.m.
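The root cause above was a database lock that was not released promptly. The incident does not say which database engine Stream uses, so the sketch below is an illustration only: assuming PostgreSQL and psycopg2, it shows how a session-level lock_timeout bounds how long a query waits on a held lock instead of hanging. The DSN, table, and values are hypothetical.

```python
# Illustration only: bounding lock waits with PostgreSQL's lock_timeout.
# The incident does not specify Stream's database engine or schema; the
# connection string and table name below are hypothetical.
import psycopg2
from psycopg2 import errors

conn = psycopg2.connect("dbname=app user=app host=db.example.internal")

try:
    with conn.cursor() as cur:
        # Fail fast instead of queueing behind a lock that is never released.
        cur.execute("SET lock_timeout = '2s'")
        cur.execute(
            "UPDATE activities SET score = score + 1 WHERE id = %s", (42,)
        )
    conn.commit()
except errors.LockNotAvailable:
    # The lock was still held after 2 seconds; roll back and surface an error
    # (or retry) rather than letting the API request hang.
    conn.rollback()
    raise
```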
Description: This incident has been resolved.
Status: Resolved
Impact: Minor | Started At: Feb. 13, 2017, 8:09 p.m.
Description: We've received additional information from AWS about this outage. To summarize, the RDS monitoring process and the DB instance both failed, causing a delay in the automated failover.

AWS's response:

> Thank you for contacting AWS Premium Support. I understand that your RDS instance was not reachable from 1:23 to 1:41 UTC on 9 February 2017 and you want to know the cause. I have investigated your RDS instance and the following is my analysis:
>
> - 2017-02-09 01:25:26: the external monitoring process was unable to communicate with the monitoring service on your instance.
> - Because of the communication issues with the monitoring process on the instance, the failover was delayed until the hard limit of the external monitoring process was reached. Before the external monitoring process forced a failover, you performed a manual reboot with failover at around 2017-02-09 01:40:42 UTC.
> - This is why CloudWatch metrics were not available during that period; they resumed uploading after the failover to the standby DB instance.
> - After confirming that the new primary DB instance was up to date with the old primary, RDS issued a replace-DB-instance operation.
> - The replace-DB-instance workflow deleted the faulty instance (the old primary), replaced it with a new instance, and synced it up with the primary DB instance.
> - This process completed successfully at 2017-02-09 01:57:45 UTC. During this process the DB instance remained available for reads and writes.
>
> Normally the failover is triggered within a few minutes; this case was indeed abnormal. It rarely happens, and we apologize for any inconvenience this issue may have caused in your environment. The RDS team works hard on improving the stability and reliability of the RDS service, but failures do sometimes occur. Our sincerest apologies for the operational pain this caused, and please let me know if there is anything else I can assist with.
Status: Postmortem
Impact: Major | Started At: Feb. 9, 2017, 1:41 a.m.
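The postmortem above mentions that a manual reboot with failover was issued before RDS's automated failover completed. As a reference only, the sketch below shows how such a failover can be forced with boto3; the instance identifier and region are hypothetical, and whether Stream triggered the failover this way is an assumption.

```python
# Illustration only: forcing an RDS failover by rebooting the primary instance.
# The instance identifier and region are hypothetical; the postmortem does not
# say how the manual reboot was issued.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# ForceFailover=True promotes the standby in a Multi-AZ deployment instead of
# rebooting the existing primary in place.
response = rds.reboot_db_instance(
    DBInstanceIdentifier="example-primary-db",
    ForceFailover=True,
)
print(response["DBInstance"]["DBInstanceStatus"])
```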
Description:

## The problem

Due to what appears to be a bug with EC2 Security Groups, connectivity between one API server and one Redis backend was impaired. This connectivity issue resulted in API requests waiting until a hard timeout occurred. At its peak, 1% of all API calls were affected and either returned a 502 error code or raised client-side timeout exceptions.

## Mitigation

Once the problem was clear, the EC2 server with the configuration problem was removed from the load balancer; this immediately resolved the problem.

## Solution

We are working with AWS support to isolate and validate this problem; in the meantime we have instrumented all our API servers to proactively check for this specific issue and decommission servers experiencing the same problem.
Status: Postmortem
Impact: None | Started At: Dec. 21, 2016, 9:14 a.m.
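The solution section of the postmortem above mentions instrumenting API servers to proactively check for this connectivity issue so affected servers can be taken out of rotation. A minimal sketch of such a check, assuming redis-py, hypothetical backend endpoints, and a load-balancer health check as the decommission mechanism, might look like this:

```python
# Illustration only: a proactive Redis connectivity check of the kind the
# postmortem describes. The endpoints and the decommission mechanism are
# hypothetical; Stream's actual instrumentation is not published.
import redis

REDIS_BACKENDS = [
    ("redis-1.example.internal", 6379),
    ("redis-2.example.internal", 6379),
]

def redis_reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if the Redis backend answers a PING within the timeout."""
    try:
        client = redis.Redis(host=host, port=port, socket_timeout=timeout,
                             socket_connect_timeout=timeout)
        return bool(client.ping())
    except redis.RedisError:
        return False

def health_check() -> bool:
    """Fail if any backend is unreachable, so the load balancer drops this server."""
    return all(redis_reachable(host, port) for host, port in REDIS_BACKENDS)

if __name__ == "__main__":
    if not health_check():
        # In the scenario described above, failing this check would remove the
        # server from the load balancer so it can be decommissioned.
        raise SystemExit(1)
```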
Description: This incident has been resolved.
Status: Resolved
Impact: Minor | Started At: Nov. 14, 2016, 2:58 p.m.
Join OutLogger to be notified when any of your vendors or the components you use experience an outage or downtime. Join for free - no credit card required.