
Is there a getstream.io outage?

getstream.io status: Systems Active

Last checked: 8 minutes ago

Get notified about any outages, downtime, or incidents for getstream.io and 1800+ other cloud vendors. Monitor 10 companies for free.

Subscribe for updates

getstream.io outages and incidents

Outage and incident data over the last 30 days for getstream.io.

There have been 0 outages or incidents for getstream.io in the last 30 days.

Severity Breakdown: none (no incidents in the last 30 days).

Tired of searching for status updates?

Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!

Sign Up Now

Components and Services Monitored for getstream.io

OutLogger tracks the status of these components for getstream.io:

Chat - Edge Active
Chat - Edge Active
Chat - API Active
Chat - Edge Active
Feed - API Active
Chat - Edge Active
CDN Active
Dashboard Active
Edge Active
Chat - API Active
Chat - Edge Active
Feed - API Active
Chat - API Active
Chat - Edge Active
Chat - API Active
Chat - Edge Active
Feed - API Active
Chat - Edge Active
Chat - API Active
Chat - Edge Active
Chat - Edge Active
Feed - API Active
Chat - API Active
Chat - Edge Active
Feed - API Active
Feed - Personalization Active
Feed - Realtime notifications Active

Latest getstream.io outages and incidents.

View the latest incidents for getstream.io and check for official updates:

Updates:

  • Time: Feb. 3, 2018, 6:46 p.m.
    Status: Resolved
    Update: A small number of Stream's customers were impacted by an issue that caused notification feeds to be temporarily unavailable. Earlier today at 15:30 GMT a maintenance database migration was initiated; unfortunately, the procedure did not clear the cached state of feeds correctly. This led to reads not showing activities from before that time. At 16:00 UTC we remediated by flushing the stale cache, which immediately fixed the problem. The issue did not affect any write operations such as adding/removing activities or following/unfollowing feeds. After some investigation we found and amended the incorrect part of the maintenance operation. Apologies for the trouble, and for bringing this up during the weekend. (An illustrative sketch of such a cache flush follows below.)
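
The update above says the remediation was flushing the stale cache, but it does not name the cache technology or key layout. Purely as an illustrative sketch, assuming (hypothetically) the notification-feed cache lived in Redis under a `feed:notification:*` key pattern, a targeted flush could look like:

```
# Hedged sketch: Redis and the "feed:notification:*" key pattern are
# assumptions for illustration; the incident names neither. This deletes
# only the stale feed keys rather than flushing the entire keyspace.
redis-cli --scan --pattern 'feed:notification:*' \
  | xargs -r -n 100 redis-cli DEL
```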

Updates:

  • Time: July 30, 2018, 7:30 p.m.
    Status: Postmortem
    Update: ### January 23 and 24 outage postmortem report

    Stream suffered two incidents of degraded performance in the past 24 hours. We take uptime very seriously and would like to be transparent about our operations with our customers.

    The spikes occurred on Jan 23 at 3:50 PM UTC and on Jan 24 at 11:45 AM UTC. Both were caused by a sudden increase of pressure on one of our PostgreSQL databases. Because PostgreSQL was slow at serving queries, HTTP requests started to pile up and eventually saturated the API workers' connection backlogs. API clients using a very low timeout will have encountered timeout exceptions; other users of Stream would have seen 5xx responses on part of their API calls.

    Some background makes it easier to explain what went wrong. Some of our internal operations rely on moving data from one PostgreSQL database to another. Thanks to `psql`, such an operation is routinely performed by piping `COPY TO STDOUT` and `COPY FROM STDOUT` together. In order not to pressure the destination database with writes, we also use `pv` so that we never consume all of our IOPS capacity. The command looks more or less like this:

    ```
    psql src_db -c '\copy (...) to stdout' | pv -p --rate-limit 5242880 | psql dst_db -c '\copy (...) from stdout'
    ```

    By terminating this copy command on the source database we were able to remove **write** pressure on the disk, after which the high latency affecting the API service resolved on its own. After ruling out other possible causes, we concluded that the pressure created by the copy command, combined with increased traffic, was behind the outage, and we picked a different time to run it again. The same operation was then restarted during low-traffic hours. To our surprise, write pressure on the source database rose again after a couple of hours and caused another, albeit shorter, outage.

    After more digging we realized that on both occasions the command was running in the background and could not write to stdout, forcing the source database to store the query results on disk, which in turn caused very high disk I/O and slow response times for regular traffic. The remediation is straightforward: never block stdout on the source database (a sketch of one way to ensure this follows this list).
  • Time: Jan. 24, 2018, 12:09 p.m.
    Status: Resolved
    Update: Between 11:43 and 11:47 UTC, API traffic saw a spike in HTTP 5xx errors and an increase in latency.
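
The postmortem above stops at the remediation statement ("never block stdout on the source database") without showing a corrected invocation. Here is a minimal sketch of one way to honor it, assuming the copy is run inside a tmux session so stdout always stays attached to the consuming pipe; the session name `pg_copy` is made up for illustration:

```
# Hedged sketch, not Stream's actual script: run the pipeline inside a
# detached tmux session so its stdout is always consumed by the pipe even
# if the operator's terminal goes away, instead of a shell background job
# whose stdout can block. The session name "pg_copy" is hypothetical.
tmux new-session -d -s pg_copy \
  "psql src_db -c '\copy (...) to stdout' \
    | pv -p --rate-limit 5242880 \
    | psql dst_db -c '\copy (...) from stdout'"
```

Running under tmux (or screen, or a supervised service) also survives SSH disconnects, which is the usual reason such long copies get pushed into the background in the first place.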

Updates:

  • Time: Jan. 23, 2018, 4:34 p.m.
    Status: Resolved
    Update: This incident has been resolved.
  • Time: Jan. 23, 2018, 4:31 p.m.
    Status: Monitoring
    Update: The API service has been back to normal since 3:57 PM UTC. The outage was due to a temporary performance degradation of a PostgreSQL server.
  • Time: Jan. 23, 2018, 4:12 p.m.
    Status: Investigating
    Update: The failures seem to have stopped. We're still looking into the root cause.
  • Time: Jan. 23, 2018, 3:56 p.m.
    Status: Investigating
    Update: Intermittent failures; the cause is not yet known.

Updates:

  • Time: Dec. 26, 2017, 10:30 p.m.
    Status: Resolved
    Update: The issue was mitigated and we are now working on a permanent solution.
  • Time: Dec. 26, 2017, 9:45 p.m.
    Status: Identified
    Update: Our main PG database (which holds the configs) is seeing high CPU. This is causing a percentage of API requests to fail. We're investigating the issue and will keep you posted.

Updates:

  • Time: July 30, 2018, 7:30 p.m.
    Status: Postmortem
    Update: # The issue

    From 19:16 to 19:24 UTC and from 19:30 to 19:46 UTC we saw a high number of HTTP 502 errors when connecting to the API.

    # The causes

    A change was made to our servers' SSH configuration that was thought to have no effect. However, on newly provisioned servers it caused a failure to start the server process. Normally this wouldn't have been a big problem, because the load balancer should mark such a host as unhealthy so that no traffic is sent to it. Unfortunately, this was not the case: a recent change in the health check logic wrongly reported the server as healthy even though the server process was down.

    # The fixes

    First we removed the bad servers manually from the load balancer. After that we fixed the problem with the SSH configuration and added the servers back to the load balancer. Finally, we changed the health check to not report healthy when the server process is down (a hedged sketch of such a check follows this list). Our apologies for the outage; our team is hard at work to further improve stability.
  • Time: Oct. 10, 2017, 8 p.m.
    Status: Resolved
    Update: The issue has been resolved, more information about the outage will follow shortly.
  • Time: Oct. 10, 2017, 7:50 p.m.
    Status: Investigating
    Update: We're currently investigating a high error rate on the APIs. A percentage of requests to the API are returning 502s, the cause is not yet identified.
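
The postmortem earlier in this list says the health check was fixed to stop reporting healthy while the server process is down, but the check itself isn't shown. A minimal sketch of such a check, as a shell script a load balancer could poll; the process name `api-server`, port 8080, and the `/health` path are all hypothetical placeholders:

```
#!/bin/sh
# Hedged sketch, not Stream's actual health check. Fail unless the server
# process exists AND answers HTTP, so the load balancer drains hosts where
# the process never started. "api-server", port 8080, and /health are
# hypothetical names.
pgrep -x api-server >/dev/null || exit 1
curl -fsS --max-time 2 http://localhost:8080/health >/dev/null || exit 1
exit 0
```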

Check the status of similar companies and alternatives to getstream.io

Discord

Systems Active

Aircall

Systems Active

Sinch

Issues Detected

CallRail

Systems Active

Phone.com

Systems Active

Mattermost

Systems Active

Dubber

Systems Active

Netomi

Systems Active

Convoso

Systems Active

Plex

Systems Active

Helpshift

Systems Active

Sedna Systems

Systems Active

Frequently Asked Questions - getstream.io

Is there a getstream.io outage?
The current status of getstream.io is: Systems Active
Where can I find the official status page of getstream.io?
The official status page for getstream.io is here
How can I get notified if getstream.io is down or experiencing an outage?
To get notified of any status changes to getstream.io, simply sign up for OutLogger's free monitoring service. OutLogger checks the official status of getstream.io every few minutes and will notify you of any changes. You can view the status of all your cloud vendors in one dashboard. Sign up here
What does getstream.io do?
A versatile API that enables the creation of social networks, activity feeds, activity streams, and chat apps with speed and scalability.