
All Systems Operational

All uptime figures below cover the last 90 days.

PostHog.com: Operational, 100.0% uptime
US Cloud 🇺🇸: Operational, 99.97% uptime
  App: Operational, 99.85% uptime
  Event and Data Ingestion Success: Operational, 100.0% uptime
  Event and Data Ingestion Lag: Operational, 99.99% uptime
  Feature Flags and Experiments: Operational, 99.98% uptime
  Session Replay Ingestion: Operational, 99.97% uptime
  Destinations: Operational, 99.98% uptime
  API /query Endpoint: Operational, 100.0% uptime
EU Cloud 🇪🇺: Operational, 99.99% uptime
  App: Operational, 99.98% uptime
  Event and Data Ingestion Success: Operational, 99.99% uptime
  Event and Data Ingestion Lag: Operational, 100.0% uptime
  Feature Flags and Experiments: Operational, 100.0% uptime
  Session Replay Ingestion: Operational, 100.0% uptime
  Destinations: Operational, 99.98% uptime
  API /query Endpoint: Operational, 100.0% uptime
Support APIs: Operational, 100.0% uptime
  Update Service: Operational, 100.0% uptime
  License Server: Operational, 100.0% uptime
AWS US 🇺🇸: Operational
  AWS ec2-us-east-1: Operational
  AWS elb-us-east-1: Operational
  AWS rds-us-east-1: Operational
  AWS elasticache-us-east-1: Operational
  AWS kafka-us-east-1: Operational
AWS EU 🇪🇺: Operational
  AWS elb-eu-central-1: Operational
  AWS elasticache-eu-central-1: Operational
  AWS rds-eu-central-1: Operational
  AWS ec2-eu-central-1: Operational
  AWS kafka-eu-central-1: Operational
[System metrics (live charts not captured): US Ingestion End to End Time, US Decide Endpoint Response Time, US App Response Time, US Event/Data Ingestion Response Time, EU Ingestion End to End Time, EU App Response Time, EU Decide Endpoint Response Time, EU Event/Data Ingestion Endpoint Response Time]
Oct 19, 2025
Resolved - We have fully restored the system and processed the delayed events.
Oct 19, 04:48 UTC
Update - We have resumed ingesting events and we're working through the accumulated backlog.
Oct 18, 22:42 UTC
Identified - We found an issue that throttled our database writes. We have pinpointed the cause and are now recovering the database; we expect all data to be caught up within the next few hours.
Oct 18, 20:02 UTC
Investigating - Our data processing infrastructure is running behind, which is causing inaccuracies in the reporting tools. No data has been lost and the system should be caught up shortly.
Oct 18, 14:42 UTC
Oct 18, 2025
Oct 17, 2025
Resolved - The issue has been resolved and migrations have been applied to repair any affected cohorts.
Oct 17, 07:32 UTC
Identified - We've identified an issue which, beginning Oct. 10, caused failures when adding new members to static cohorts. Affected cohorts may display the newly added persons in the app, but those persons will not be counted as members of the cohort in any calculations.

This issue has since been resolved, and new members can once again be added to static cohorts. Members added during this period will be reinstated automatically soon. If needed, a new cohort can be created in the interim.

Oct 16, 20:14 UTC
Investigating - We've identified a bug in our cohort management systems, which has caused recently created cohorts to be in an invalid state. We're working on a fix, and will backfill any invalid data once it's in place.
Oct 16, 19:20 UTC
Oct 16, 2025
Completed - Everything is done, thanks for your patience!
Oct 16, 13:41 UTC
Verifying - Maintenance work is done, and we have verified that the systems have resumed data processing and all is well.

Delays in processing are going down. We are monitoring.

Oct 16, 13:01 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Oct 16, 12:00 UTC
Scheduled - We will perform routine maintenance on PostHog Cloud US, which will delay event ingestion and destinations. No data will be lost during this operation.
Oct 14, 08:56 UTC
Completed - The scheduled maintenance has been completed.
Oct 16, 10:00 UTC
Verifying - Maintenance work is done; we've verified that everything looks good and we are now monitoring. The lag has caught up for most of you and should be consumed entirely very soon.
Oct 16, 08:59 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Oct 16, 08:00 UTC
Scheduled - We will perform routine maintenance on PostHog Cloud EU, which will delay event ingestion and destinations. No data will be lost during this operation.
Oct 14, 08:54 UTC
Oct 15, 2025
Resolved - All traffic has been stable during our monitoring period, and we're resolving this issue. Thank you for your patience and apologies for the disruption.
Oct 15, 20:58 UTC
Monitoring - An update to our ingress control plane has been rolled back, and we're monitoring all traffic flowing into PostHog.
Oct 15, 20:31 UTC
Identified - We have identified the issue and are rolling out the fix.
Oct 15, 20:23 UTC
Investigating - We're experiencing an elevated level of errors and are currently looking into the issue.
Oct 15, 20:15 UTC
Resolved - The maintenance has completed and processing delays have caught up.
Oct 15, 00:04 UTC
Monitoring - Operators are performing scheduled database maintenance in the EU region.
During the maintenance window, delays are expected in both event processing and realtime destinations.

Oct 14, 22:04 UTC
Oct 14, 2025
Oct 13, 2025
Resolved - We have released the fix and the affected queries are now scanning the expected amount of data.

Cluster load is back to normal and queries are responding as expected.

Oct 13, 20:24 UTC
Update - We have applied some configuration changes to the cluster that have allowed us to limit the load.

We are already testing the fix for the queries and will release it soon.

The cluster is more responsive now and load looks better. Queries are responding better, but there may still be periods in which they are not as responsive.

Oct 13, 15:25 UTC
Identified - We have spotted an issue where some queries are scanning far more data than they should, causing our cluster to struggle with performance.

We are working on a fix.

Oct 13, 12:55 UTC
Oct 12, 2025

No incidents reported.

Oct 11, 2025

No incidents reported.

Oct 10, 2025
Resolved - We're in the clear. Thanks for your patience, and have an awesome day.
Oct 10, 08:32 UTC
Update - We're catching up lag and are cleaning things up. We will post more updates if necessary and resolve once we're completely done.
Oct 10, 08:12 UTC
Update - We're still scaling up step by step. We're catching up the lag and are monitoring.
Oct 10, 07:36 UTC
Update - We're slowly scaling up data ingestion and are monitoring.
Oct 10, 07:13 UTC
Update - The maintenance is still ongoing; we will post an update once there is new information to share.
Oct 10, 06:46 UTC
Monitoring - We are running scheduled database maintenance. We expect there to be delays to event processing and event delivery.
This includes delays in realtime destinations.

Oct 10, 04:16 UTC
Oct 9, 2025

No incidents reported.

Oct 8, 2025
Resolved - We think the impact from the issue is over.
Oct 8, 12:07 UTC
Update - After monitoring the behaviour for a while, we have not seen the issue come up again. We are back to normal.
Oct 8, 07:10 UTC
Monitoring - We deployed a fix that should prevent this problem from happening in the future.
Oct 7, 14:27 UTC
Update - We saw pods that were stuck and redeployed them. This fixed the issue temporarily. We are still investigating the root cause.
Oct 7, 08:01 UTC
Investigating - Our data processing infrastructure is running behind, which is causing delays in some types of CDP destinations. No data has been lost and the system should be caught up shortly.
Oct 7, 07:05 UTC
Oct 7, 2025
Resolved - This incident has been resolved.
Oct 7, 08:27 UTC
Investigating - We've observed elevated rates of errors loading experiment results. We're working to diagnose the issue now. No data has been lost.
Oct 7, 07:18 UTC
Oct 6, 2025
Resolved - We are fully caught up on legacy CDP destination deliveries.
Oct 6, 19:50 UTC
Monitoring - Lag is decreasing and should be fully caught up soon.
Oct 6, 19:05 UTC
Investigating - Our data processing infrastructure is running behind, which is causing delays in some types of CDP destinations. No data has been lost and the system should be caught up shortly.
Oct 6, 18:07 UTC
Resolved - This incident has been resolved.
Oct 6, 16:14 UTC
Update - We have confirmed that the fix has resolved the issue in the latest version of the SDK.

Exceptions caught as a result of the bug in our SDK will not contribute towards your usage.

Oct 3, 22:27 UTC
Monitoring - A fix has been released and we are now monitoring the situation.

If required, the latest posthog-js version can be found at https://github.com/PostHog/posthog-js/releases/tag/posthog-js%401.270.1 or https://www.npmjs.com/package/posthog-js/v/1.270.1

Oct 3, 17:04 UTC
Identified - We've identified a bug in a recently shipped version of our JavaScript web SDK, related to our surveys product. It seems to have had minimal impact on user site functionality, but it is resulting in hugely inflated rates of exception capture for teams using our error tracking product. We're currently working to release a fix, and identifying impacted users so we can refund any exceptions captured as a result of this error.

If you see elevated rates of exceptions captured with:
Type: TypeError
Message: e.persistence.isDisabled is not a function

and are billed for these exceptions, please contact support and select "Error Tracking" as the product.

Oct 3, 16:33 UTC
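
For teams that were affected, the client-side remediation is to pick up the patched posthog-js release linked above. The sketch below is illustrative only: it assumes an npm-based web app that initialises posthog-js directly, the project API key and host are placeholders, and the optional before_send filter (a stopgap while visitors may still be running an older cached bundle) assumes the before_send config hook and the $exception event shape rather than anything confirmed in this incident report.

    // Upgrade to the patched release noted above (or any later version):
    //   npm install posthog-js@1.270.1
    import posthog from 'posthog-js'

    posthog.init('<ph_project_api_key>', {      // placeholder project key
      api_host: 'https://us.i.posthog.com',     // or the EU ingestion host
      // Optional, defensive: drop the specific TypeError described above so it
      // is not captured while older SDK bundles are still being served.
      // Assumption: before_send hook and $exception event shape; verify against
      // the posthog-js version you actually ship.
      before_send: (event) => {
        if (event?.event === '$exception') {
          const props = JSON.stringify(event.properties ?? {})
          if (props.includes('e.persistence.isDisabled is not a function')) {
            return null // drop the event entirely
          }
        }
        return event
      },
    })

Upgrading alone is enough once all visitors load the fixed bundle; the filter is only a temporary guard and can be removed afterwards.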
Oct 5, 2025

No incidents reported.