
Is there a PostHog outage?

PostHog status: Systems Active

Last checked: 8 minutes ago

Get notified about any outages, downtime, or incidents for PostHog and 1,800+ other cloud vendors. Monitor 10 companies for free.

Subscribe for updates

PostHog outages and incidents

Outage and incident data over the last 30 days for PostHog.

There have been 12 outages or incidents for PostHog in the last 30 days.


Tired of searching for status updates?

Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!

Sign Up Now

Components and Services Monitored for PostHog

OutLogger tracks the status of these components for PostHog:

PostHog.com Active
AWS ec2-eu-central-1 Active
AWS elasticache-eu-central-1 Active
AWS elb-eu-central-1 Active
AWS kafka-eu-central-1 Active
AWS rds-eu-central-1 Active
AWS ec2-us-east-1 Active
AWS elasticache-us-east-1 Active
AWS elb-us-east-1 Active
AWS kafka-us-east-1 Active
AWS rds-us-east-1 Active
App Active
Event and Data Ingestion Active
Feature Flags and Experiments Active
Session Replay Ingestion Active
License Server Active
Update Service Active
App Active
Event and Data Ingestion Active
Feature Flags and Experiments Active
Session Replay Ingestion Active

Latest PostHog outages and incidents

View the latest incidents for PostHog and check for official updates:

Updates:

  • Time: July 23, 2024, 1:15 a.m.
    Status: Resolved
    Update: We are all caught up on all replicas and all events are accounted for. We've also identified the exact root cause of this per-partition lag and have mitigating steps in place that will prevent this issue from happening again. We apologize for any inconvenience this may have caused.
  • Time: July 22, 2024, 5:33 p.m.
    Status: Monitoring
    Update: We've recovered our backlog on all but one instance which is going slower than anticipated. All recent events that have come in since Saturday are up to date, but a small percentage of events that came in on Thursday - Friday of last week are still in flight. As long as events continue to be processed on this node at the current rate we will be fully up to date by the end of the day.
  • Time: July 21, 2024, 2:33 p.m.
    Status: Monitoring
    Update: We will be 100% caught up on all events by the end of day today. We'll send out another status update as soon as that backfill is complete.
  • Time: July 19, 2024, 10:53 p.m.
    Status: Monitoring
    Update: We've identified the root cause of the issue and have mitigated it. We have also kicked off a backfill that will run over the weekend. We are shooting to have all events back in order and up to date by Monday morning. Expect updates over the weekend on the progress of the backfill of missing events. Thank you all for your patience and we hope you enjoy the rest of your Friday and the weekend!
  • Time: July 19, 2024, 11:36 a.m.
    Status: Investigating
    Update: We are continuing to investigate and are close to understanding the reason behind the event ingestion problem. It seems the root cause is not in the Kafka table engine but in our write path to the distributed tables. Event ingestion has resumed, but it is going slowly to avoid losing those events, so there will be ingestion lag for some hours. We are working on pushing another patch to fix the lag. Once that is resolved, we'll start the event backfill for the missing dates.
  • Time: July 18, 2024, 4:51 p.m.
    Status: Investigating
    Update: We are investigating an issue with our Kafka table engines and have purposely induced lag on our pipeline. All events are safe and will show up after this investigation, but for the moment we will fall behind on processing events and you will notice the last few hours missing in your reporting.
  • Time: July 18, 2024, 1:29 p.m.
    Status: Investigating
    Update: We have started event recovery. Data may be missing since 2024-07-17 at 21:00 UTC. The missing events will eventually be available for querying. We are now working on pushing a fix to avoid this happening again.
  • Time: July 18, 2024, 12:34 p.m.
    Status: Investigating
    Update: We've spotted that the events ingested are lower than expected. We are identifying the root cause of the issue. No data has been lost, and we are already drafting a plan to recover it, identifying the impacted dates.

Updates:

  • Time: July 16, 2024, 2:07 p.m.
    Status: Resolved
    Update: This incident has been resolved.
  • Time: July 16, 2024, 1:02 p.m.
    Status: Monitoring
    Update: Recovery is continuing well and we expect to be caught up within an hour. Sorry for the interruption!
  • Time: July 16, 2024, 12:38 p.m.
    Status: Identified
    Update: We've restarted our recordings ingestion infrastructure and ingestion is recovering. Folks will be experiencing between 40 and 90 minutes of delay, but that's already recovering quickly. We're still looking into the root cause.
  • Time: July 16, 2024, 12:21 p.m.
    Status: Investigating
    Update: We've spotted that recordings ingestion is delayed. We're investigating to identify why.

Updates:

  • Time: July 14, 2024, 7:53 p.m.
    Status: Resolved
    Update: Workers are processing queries as they arrive. All systems nominal.
  • Time: July 14, 2024, 5:28 p.m.
    Status: Monitoring
    Update: We have restarted the failed workers - queries are back to normal now.
  • Time: July 14, 2024, 5:15 p.m.
    Status: Identified
    Update: Async queries are failing - we are restarting the workers now.
  • Time: July 14, 2024, 4:51 p.m.
    Status: Investigating
    Update: Queries are timing out on EU; we are taking a look into what's going on.

Updates:

  • Time: July 9, 2024, 9:56 p.m.
    Status: Resolved
    Update: Planned maintenance is complete.
  • Time: July 9, 2024, 8:10 p.m.
    Status: Identified
    Update: We are doing an upgrade of our data processing infrastructure in the EU region. There will be temporary processing delays. No data has been lost and the system should be caught up shortly.

Updates:

  • Time: July 11, 2024, 2:36 p.m.
    Status: Resolved
    Update: This incident has been resolved.
  • Time: July 10, 2024, 1:23 p.m.
    Status: Monitoring
    Update: We've downgraded this and marked ingestion as operational now that we have duplicate ingestion infrastructure. Replay is working normally and we are continuing to process the delayed recordings.
  • Time: July 10, 2024, 11:19 a.m.
    Status: Monitoring
    Update: We've duplicated our ingestion infrastructure so that we can protect current recordings from the delay. You should no longer see delay on ingestion of current recordings; we'll continue to ingest the delayed recordings in the background.
  • Time: July 10, 2024, 9:25 a.m.
    Status: Monitoring
    Update: We're continuing to work to increase ingestion throughput. Sorry for the continued interruption.
  • Time: July 9, 2024, 2 p.m.
    Status: Monitoring
    Update: We're continuing to slowly catch up with ingestion. We're being a little cautious as we don't want to overwhelm Kafka while we're making solid progress. We appreciate that delays like this are super frustrating, and we're really grateful for your patience 🙏
  • Time: July 9, 2024, 5:56 a.m.
    Status: Monitoring
    Update: We've continued to monitor ingestion overnight. Some Kafka partitions are completely caught up, so some people won't experience any delay. Unfortunately, others are still lagging, so you will still see delayed availability of recordings. Really sorry for the continued interruption!
  • Time: July 8, 2024, 6:13 p.m.
    Status: Monitoring
    Update: We're continuing to monitor recovery, apologies for the delay!
  • Time: July 8, 2024, 2:05 p.m.
    Status: Monitoring
    Update: We've confirmed that the config rollback has resolved the problem, but we've kept ingestion throttled to ensure systems can recover. We're slowly increasing the ingestion rate to allow recovery and will keep monitoring. Sorry for the interruption.
  • Time: July 8, 2024, 11:31 a.m.
    Status: Identified
    Update: A recent config change has unexpectedly impacted processing speed during ingestion of recordings. The change has been rolled back and we're monitoring for recovery.

Check the status of similar companies and alternatives to PostHog

Gainsight
Systems Active

Glia
Systems Active

Gorgias
Systems Active

observeai
Systems Active

Playvox
Systems Active

Help Scout
Systems Active

Experience
Systems Active

Totango
Systems Active

emnify
Systems Active

Spiceworks
Systems Active

Aloware
Systems Active

Close
Systems Active

Frequently Asked Questions - PostHog

Is there a PostHog outage?
The current status of PostHog is: Systems Active
Where can I find the official status page of PostHog?
The official status page for PostHog is here
How can I get notified if PostHog is down or experiencing an outage?
To get notified of any status changes to PostHog, simply sign up for OutLogger's free monitoring service. OutLogger checks the official status of PostHog every few minutes and will notify you of any changes. You can view the status of all your cloud vendors in one dashboard. Sign up here (or see the polling sketch below these FAQs if you'd rather roll your own check).
What does PostHog do?
PostHog is developing an open-source Product OS, the first of its kind in the world.
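
For readers who would rather run their own check than use a dashboard, the sketch below shows one way to poll an official status page on a schedule, which is essentially what the notification flow described in the FAQ above does. It assumes PostHog's status page is an Atlassian Statuspage instance reachable at status.posthog.com exposing the usual /api/v2/status.json summary; the URL, polling interval, and function names (fetch_status, watch) are illustrative assumptions, not OutLogger's actual implementation.

    import json
    import time
    import urllib.request

    # Assumption: PostHog's official status page is an Atlassian Statuspage
    # instance, which typically exposes a JSON summary at /api/v2/status.json.
    STATUS_URL = "https://status.posthog.com/api/v2/status.json"
    POLL_INTERVAL_SECONDS = 300  # check every few minutes, similar to OutLogger

    def fetch_status() -> str:
        """Return the human-readable status description, e.g. 'All Systems Operational'."""
        with urllib.request.urlopen(STATUS_URL, timeout=10) as response:
            payload = json.load(response)
        return payload.get("status", {}).get("description", "unknown")

    def watch() -> None:
        """Poll the status page and print a line whenever the reported status changes."""
        last_status = None
        while True:
            try:
                current = fetch_status()
            except OSError as exc:  # covers network errors, timeouts, HTTP errors
                print(f"Could not reach status page: {exc}")
            else:
                if current != last_status:
                    print(f"PostHog status changed: {last_status!r} -> {current!r}")
                    last_status = current
            time.sleep(POLL_INTERVAL_SECONDS)

    if __name__ == "__main__":
        watch()

Left running in a terminal, this prints a line each time the reported status description changes; a real monitor would add retries, alert delivery (email, Slack), and per-component checks.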