Outage and incident data over the last 30 days for AssemblyAI.
OutLogger tracks the status of these components for AssemblyAI:
| Component | Status |
|---|---|
| Billing | Active |
| APIs | Active |
| Asynchronous API | Active |
| LeMUR | Active |
| Realtime API | Active |
| AWS | Active |
| Compute Instances | Active |
| Container Control Plane | Active |
| Container Registry | Active |
| Database | Active |
| Load Balancers | Active |
| Object Storage | Active |
| Transcription Queue | Active |
| Usage Statistics Bus | Active |
| Usage Statistics Database | Active |
| Web | Active |
| Dashboard | Active |
| Playground | Active |
| Website | Active |
View the latest incidents for AssemblyAI and check for official updates:
Description: Our team has identified the root cause of the issues that were affecting Real-Time earlier and has deployed a fix to address them. Users are once again able to initiate Real-Time sessions using temporary tokens, and FinalTranscript text now contains the expected punctuation.
Status: Resolved
Impact: Major | Started At: Sept. 14, 2023, 2:57 p.m.
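For context on what "initiating a Real-Time session using temporary tokens" involves, here is a minimal Python sketch of that flow. It assumes AssemblyAI's v2 Real-Time token endpoint and a valid API key; the key placeholder and expiry value are illustrative and not taken from the incident report.

```python
# Minimal sketch: mint a short-lived temporary token and note where it would be
# used to open a Real-Time session. API key and expiry are placeholders.
import json
import urllib.request

API_KEY = "your-assemblyai-api-key"  # assumption: supplied by the caller

# Request a temporary token from the v2 Real-Time token endpoint.
req = urllib.request.Request(
    "https://api.assemblyai.com/v2/realtime/token",
    data=json.dumps({"expires_in": 3600}).encode(),
    headers={"authorization": API_KEY, "content-type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    token = json.load(resp)["token"]

# The token is then passed when opening the Real-Time WebSocket session, e.g.:
# wss://api.assemblyai.com/v2/realtime/ws?sample_rate=16000&token=<token>
print("temporary realtime token:", token[:8], "...")
```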
Description: We wanted to reach back out to share more detailed information on the incidents that occurred on 8/7 and 8/8. These incidents were caused by separate issues; see below for a description of each issue and the steps taken to remedy them.
8/7 Incident
Cause: An inefficient database usage pattern change was submitted and deployed on 8/2. Although inefficient, no regression was detected under the standard load at the time of deployment. On 8/7 we encountered a new peak load which, combined with this inefficiency, led to a large increase in latency (turnaround times); this was the incident faced that day.
Resolution: We identified and reverted the database usage change committed on 8/2 that led to the slowdown, and we upgraded our database instance size.
8/8 Incident
Cause: A full-table query was run against our write replica database while a team worked to transfer data to BigQuery for business intelligence tooling. This led to database contention and slowed down our production service.
Resolution: We implemented more fine-grained controls and roles for database access, along with an approval process to verify that production database queries are run against the correct replica and will not impact customers.
If you have any questions about this information, feel free to reach out to [email protected].
Status: Postmortem
Impact: Minor | Started At: Aug. 8, 2023, 7:56 p.m.
Description: Shared postmortem covering the 8/7 and 8/8 incidents; see the full write-up under the Aug. 8 entry above.
Status: Postmortem
Impact: Minor | Started At: Aug. 7, 2023, 7:58 p.m.
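As a rough illustration of the 8/8 resolution described in the postmortem above (restricted database roles plus routing analytics queries to an appropriate replica), the sketch below shows one way such a control could look. All hostnames, role names, and the psycopg2 wiring are hypothetical and are not drawn from AssemblyAI's actual infrastructure.

```python
# Hypothetical sketch of the kind of control described in the 8/8 resolution:
# BI/export work is forced onto a dedicated analytics replica under a
# restricted role, so a full-table read cannot contend with production traffic.
# Hostnames, role names, and credentials are illustrative only.
import psycopg2

DSNS = {
    # Production traffic: primary (writes) and its serving path.
    "app_primary": "host=db-primary.internal dbname=prod user=app_rw",
    # BI/exports: a replica that does not serve production reads.
    "analytics_replica": "host=db-analytics-replica.internal dbname=prod user=bi_readonly",
}

def connection_for(workload: str):
    """Return a connection bound to the database target approved for the workload.

    Raises instead of silently falling back, so an unapproved workload
    cannot reach the production databases.
    """
    if workload not in DSNS:
        raise PermissionError(f"workload '{workload}' has no approved database target")
    return psycopg2.connect(DSNS[workload])

# A BigQuery-style full-table export now runs against the analytics replica:
with connection_for("analytics_replica") as conn, conn.cursor() as cur:
    cur.execute("SELECT * FROM transcripts")  # full-table read, isolated from prod
    rows = cur.fetchall()
```

Failing loudly on an unknown workload, rather than defaulting to a production connection, mirrors the approval-process idea in the postmortem: an unapproved query path errors out instead of quietly landing on the wrong database.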