-
Time: Aug. 31, 2019, 8:45 p.m.
Status: Resolved
Update: Amazon has reported that, as of 1:30 PDT, the issue causing latency has been fully resolved.
-
Time: Aug. 31, 2019, 5:52 p.m.
Status: Monitoring
Update: Most current information from Amazon:
10:47 AM PDT We want to give you more information on progress at this point, and what we know about the event. At 4:33 AM PDT one of 10 datacenters in one of the 6 Availability Zones in the US-EAST-1 Region saw a failure of utility power. Backup generators came online immediately, but for reasons we are still investigating, began quickly failing at around 6:00 AM PDT. This resulted in 7.5% of all instances in that Availability Zone failing by 6:10 AM PDT. Over the last few hours we have recovered most instances but still have 1.5% of the instances in that Availability Zone remaining to be recovered. Similar impact existed to EBS and we continue to recover volumes within EBS. New instance launches in this zone continue to work without issue.
We will continue to monitor and will update this page when Amazon has fully recovered.
-
Time: Aug. 31, 2019, 3:18 p.m.
Status: Monitoring
Update: The MyCase service experienced some intermittent latency due to a reported issue with Amazon EC2 in the US-EAST-1 Region. We will continue to monitor and will update this page when Amazon reports full recovery.
From Amazon's status page (https://status.aws.amazon.com/):
6:22 AM PDT We are investigating connectivity issues affecting some instances in a single Availability Zone in the US-EAST-1 Region.
6:54 AM PDT We can confirm that some instances are impaired and some EBS volumes are experiencing degraded performance within a single Availability Zone in the US-EAST-1 Region. Some EC2 APIs are also experiencing increased error rates and latencies. We are working to resolve the issue.
7:37 AM PDT We can confirm that some instances are impaired and some EBS volumes are experiencing degraded performance within a single Availability Zone in the US-EAST-1 Region. We are investigating increased error rates for new launches within the same Availability Zone. We are working to resolve the issue.
8:06 AM PDT We are starting to see recovery for instance impairments and degraded EBS volume performance within a single Availability Zone in the US-EAST-1 Region. We are also starting to see recovery of EC2 APIs. We continue to work towards recovery for all affected EC2 instances and EBS volumes.
-
Time: Aug. 31, 2019, 2:38 p.m.
Status: Investigating
Update: MyCase is currently experiencing degraded performance of certain features within the system, resulting in slowness. Our Development Team is aware of this issue and is working as quickly as possible to resolve it.
Please check back on this page, or subscribe to updates, and we will provide an update shortly.