Is there a getstream.io outage?

getstream.io status: Systems Active

Last checked: 8 minutes ago

Get notified about any outages, downtime, or incidents for getstream.io and 1,800+ other cloud vendors. Monitor 10 companies for free.

Subscribe for updates

getstream.io outages and incidents

Outage and incident data over the last 30 days for getstream.io.

There have been 0 outages or incidents for getstream.io in the last 30 days.

Tired of searching for status updates?

Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!

Sign Up Now

Components and Services Monitored for getstream.io

OutLogger tracks the status of these components for getstream.io:

Chat - API: Active
Chat - Edge: Active
Feed - API: Active
Feed - Personalization: Active
Feed - Realtime notifications: Active
CDN: Active
Dashboard: Active
Edge: Active

Latest getstream.io outages and incidents.

View the latest incidents for getstream.io and check for official updates:

Updates:

  • Time: Feb. 16, 2017, 3:29 p.m.
    Status: Resolved
    Update: At 2:41 PM UTC we experienced approximately 60 seconds of downtime with the Stream API. This was caused by a database lock not being released in a timely manner. Regular service was restored immediately after the lock was released.
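
The update above attributes the downtime to a database lock that was not released promptly. One common guard is to bound how long a query may wait on a contended lock, so a stuck lock fails fast instead of stalling the API. A minimal sketch, assuming a PostgreSQL backend (the incident note does not name the database) and placeholder connection and table details:

    import psycopg2

    conn = psycopg2.connect("dbname=app user=api")  # hypothetical DSN
    with conn.cursor() as cur:
        # Abort any statement that waits more than 2 s on a lock,
        # and any statement that runs for more than 5 s overall.
        cur.execute("SET lock_timeout = '2s'")
        cur.execute("SET statement_timeout = '5s'")
        # Hypothetical write that would otherwise block behind a held lock.
        cur.execute("UPDATE feeds SET updated_at = now() WHERE id = %s", (42,))
    conn.commit()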

Updates:

  • Time: Feb. 14, 2017, 8:35 a.m.
    Status: Resolved
    Update: This incident has been resolved.
  • Time: Feb. 13, 2017, 9:52 p.m.
    Status: Monitoring
    Update: A fix has been implemented and we are monitoring the results.
  • Time: Feb. 13, 2017, 8:19 p.m.
    Status: Identified
    Update: One of the Cassandra servers was very slow at serving queries due to a long stop-the-world GC pause.
  • Time: Feb. 13, 2017, 8:09 p.m.
    Status: Investigating
    Update: We are investigating this issue.
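
The updates above describe one Cassandra node serving queries slowly because of a long stop-the-world GC pause. A common client-side mitigation is to cap per-request latency and speculatively retry idempotent queries on another replica. A minimal sketch, assuming the DataStax Python driver (the driver in use is not stated here) with placeholder hosts and schema:

    from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
    from cassandra.policies import ConstantSpeculativeExecutionPolicy
    from cassandra.query import SimpleStatement

    profile = ExecutionProfile(
        request_timeout=2.0,  # fail the request after 2 s instead of hanging on a slow node
        # After 0.5 s with no reply, send the same query to another replica (at most 2 extra attempts).
        speculative_execution_policy=ConstantSpeculativeExecutionPolicy(0.5, 2),
    )
    cluster = Cluster(["10.0.0.1"], execution_profiles={EXEC_PROFILE_DEFAULT: profile})
    session = cluster.connect("feeds")  # hypothetical keyspace

    # Speculative execution only applies to statements marked idempotent.
    stmt = SimpleStatement("SELECT * FROM activities WHERE feed_id = %s", is_idempotent=True)
    rows = session.execute(stmt, ("user:42",))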

Updates:

  • Time: July 30, 2018, 7:30 p.m.
    Status: Postmortem
    Update: We've received additional information from AWS about this outage. To summarize, the RDS monitoring process and the DB instance both failed, causing a delay in automated failover. The response from AWS Premium Support follows:
      "Thank you for contacting AWS Premium Support. I understand that your RDS instance was not reachable from 1:23 to 1:41 UTC on 9th of February 2017 and you want to know the cause for it. I have investigated your RDS instance and the following is my analysis:
      - 2017-02-09 01:25:26: the external monitoring process was unable to communicate with the monitoring service on your instance.
      - Because of the communication issues with the monitoring process on the instance, failover was delayed until the external monitoring process reached its hard limit. Before the external monitoring process forced a failover, you performed a manual reboot with failover at around 2017-02-09 01:40:42 UTC.
      - That is why CloudWatch metrics were not available during that period; they resumed uploading after the failover to the standby DB instance.
      - After confirming that the new primary DB instance was up to date with the old primary, RDS issued a replace-DB-instance workflow.
      - The replace-DB-instance workflow deleted the faulty instance (the old primary) and replaced it with a new instance, which then synced with the primary DB instance.
      - This process completed successfully at 2017-02-09 01:57:45 UTC. During this process, the DB instance remained available for reads and writes.
      Normally the failover would be triggered within a few minutes, so this delay is indeed abnormal. It rarely happens, and we apologize for any inconvenience this issue may have caused to your environment. The RDS team always works hard on improving the stability and reliability of the RDS service, but failures do sometimes occur. Our sincerest apologies for the operational pain this caused you, and please let me know if there is anything else I can assist with."
  • Time: Feb. 9, 2017, 1:49 a.m.
    Status: Resolved
    Update: The problem was related to a hardware failure with one of our databases. The faulty server was replaced with a hot backup.
  • Time: Feb. 9, 2017, 1:41 a.m.
    Status: Investigating
    Update: We are investigating an outage on the API
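
The AWS postmortem above notes that a manual reboot with failover was issued while automated failover was delayed. A minimal sketch of forcing such a failover with boto3, assuming a Multi-AZ RDS deployment and a placeholder instance identifier:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")  # region is a placeholder
    rds.reboot_db_instance(
        DBInstanceIdentifier="stream-prod-db",  # hypothetical instance identifier
        ForceFailover=True,  # promote the Multi-AZ standby instead of rebooting in place
    )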

Updates:

  • Time: July 30, 2018, 7:30 p.m.
    Status: Postmortem
    Update: The problem: due to what appears to be a bug with EC2 Security Groups, connectivity between one API server and one Redis backend was impaired. This connectivity issue resulted in API requests waiting until a hard timeout occurred. At its peak, 1% of all API calls were affected and either returned a 502 error code or raised client-side timeout exceptions.
      Mitigation: once the problem was clear, the EC2 server with the configuration problem was removed from the load balancer, which immediately resolved the problem.
      Solution: we are working with AWS support to isolate and validate this problem; in the meantime we have instrumented all our API servers to proactively check for this specific issue and decommission servers experiencing the same problem.
  • Time: Dec. 21, 2016, 9:55 a.m.
    Status: Resolved
    Update: The issue has been resolved. We're still investigating the root cause.
  • Time: Dec. 21, 2016, 9:14 a.m.
    Status: Investigating
    Update: We're investigating a slowdown on our main feed API endpoint.
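
The postmortem above says the API servers were instrumented to proactively check for this issue and decommission affected servers. A minimal sketch of that idea, not Stream's actual instrumentation: a health-check endpoint that pings the Redis backend with a short timeout and reports unhealthy so the load balancer takes the instance out of rotation. Host names and ports are placeholders:

    from flask import Flask
    import redis

    app = Flask(__name__)
    backend = redis.Redis(host="redis.internal", port=6379,
                          socket_timeout=0.5, socket_connect_timeout=0.5)

    @app.route("/healthz")
    def healthz():
        try:
            backend.ping()
            return "ok", 200
        except redis.RedisError:
            # Connectivity to Redis is impaired; report unhealthy so the
            # load balancer stops routing traffic to this instance.
            return "redis unreachable", 503

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)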

Updates:

  • Time: Nov. 14, 2016, 3:03 p.m.
    Status: Resolved
    Update: This incident has been resolved.
  • Time: Nov. 14, 2016, 3 p.m.
    Status: Identified
    Update: The issue has been identified and a fix is being implemented.
  • Time: Nov. 14, 2016, 2:58 p.m.
    Status: Investigating
    Update: We are investigating a degradation of service

Check the status of similar companies and alternatives to getstream.io

Discord: Systems Active
Aircall: Systems Active
Sinch: Issues Detected
CallRail: Systems Active
Phone.com: Systems Active
Mattermost: Systems Active
Dubber: Systems Active
Netomi: Systems Active
Convoso: Systems Active
Plex: Systems Active
Helpshift: Systems Active
Sedna Systems: Systems Active

Frequently Asked Questions - getstream.io

Is there a getstream.io outage?
The current status of getstream.io is: Systems Active
Where can I find the official status page of getstream.io?
The official status page for getstream.io is here
How can I get notified if getstream.io is down or experiencing an outage?
To get notified of any status changes to getstream.io, simply sign up for OutLogger's free monitoring service. OutLogger checks the official status of getstream.io every few minutes and will notify you of any changes. You can view the status of all your cloud vendors in one dashboard. Sign up here. A minimal sketch of this kind of polling appears at the end of this page.
What does getstream.io do?
A versatile API that enables the creation of social networks, activity feeds, activity streams, and chat apps with speed and scalability.
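
As the FAQ notes, monitoring of this kind works by polling the vendor's official status page every few minutes and notifying subscribers when the reported status changes. A minimal sketch of such a poller, assuming a Statuspage-style JSON endpoint for getstream.io (the URL below is an assumption, not taken from this page):

    import time
    import requests

    STATUS_URL = "https://status.getstream.io/api/v2/status.json"  # assumed endpoint

    def current_status():
        resp = requests.get(STATUS_URL, timeout=10)
        resp.raise_for_status()
        # Statuspage-style payloads report an overall indicator such as "none", "minor", or "major".
        return resp.json()["status"]["indicator"]

    last = current_status()
    while True:
        time.sleep(300)  # poll every five minutes
        status = current_status()
        if status != last:
            print(f"getstream.io status changed: {last} -> {status}")
            last = status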