
Is there a Pagely outage?

Pagely status: Systems Active

Last checked: 8 minutes ago

Get notified about any outages, downtime, or incidents for Pagely and 1,800+ other cloud vendors. Monitor 10 companies for free.

Subscribe for updates

Pagely outages and incidents

Outage and incident data over the last 30 days for Pagely.

There have been 0 outages or incidents for Pagely in the last 30 days.


Tired of searching for status updates?

Join OutLogger to be notified when any of your vendors or the components you use experience an outage. It's completely free and takes less than 2 minutes!

Sign Up Now

Components and Services Monitored for Pagely

OutLogger tracks the status of these components for Pagely:

API: Active
Atomic Control Panel: Active
Automation: Active
CI/CD Integrations: Active
Live Chat: Active
PressDNS: Active
Shared Hosting Infrastructure: Active
Site Backup Downloads: Active
Slack Support Channels: Active
Support System: Active
VPS Hosting Infrastructure: Active

Latest Pagely outages and incidents

View the latest incidents for Pagely and check for official updates:

Updates:

  • Time: Feb. 15, 2022, 7:56 p.m.
    Status: Resolved
    Update: Cloudflare is now reporting "All Systems Operational."
  • Time: Feb. 15, 2022, 7:39 p.m.
    Status: Identified
    Update: Since approximately 17:29 UTC or before, Cloudflare has been experiencing network congestion. If you are routing your Pagely traffic through Cloudflare, you may be experiencing Cloudflare error response codes like 520 or 524. Cloudflare has been updating the status of this incident on their end at https://www.cloudflarestatus.com/. Pagely hosting itself is not directly affected.
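
The Cloudflare incident above is a good example of a site appearing down while the host itself is healthy: Cloudflare sits in front of the origin and returns its own 52x error codes when it cannot reach it. As a quick way to tell the two apart, here is a minimal Python sketch using the requests library; the URL is a placeholder, not a real Pagely-hosted site.

```python
# Minimal sketch: distinguish a Cloudflare edge error from a normal
# response. Cloudflare returns 52x codes (e.g. 520, 522, 524) when it
# cannot reach, or times out waiting for, the origin server.
import requests

CLOUDFLARE_ORIGIN_ERRORS = {520, 521, 522, 523, 524, 525, 526}

def check_site(url: str) -> str:
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException as exc:
        return f"request failed entirely: {exc}"
    if resp.status_code in CLOUDFLARE_ORIGIN_ERRORS:
        return f"Cloudflare edge error {resp.status_code}: origin unreachable or slow"
    return f"HTTP {resp.status_code}: reached the site through Cloudflare"

if __name__ == "__main__":
    print(check_site("https://example.com"))  # placeholder URL
```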

Updates:

  • Time: Jan. 7, 2022, 6:44 p.m.
    Status: Resolved
    Update: Upgrades are now complete.
  • Time: Jan. 6, 2022, 11:50 p.m.
    Status: Monitoring
    Update: The Pagely team has already begun rolling out this patch for all customers. If you have a version hold request on file, we will patch your site while keeping it on the same major branch version.

Updates:

  • Time: Dec. 23, 2021, 12:36 a.m.
    Status: Resolved
    Update: Update from Pagely: While we will continue monitoring for any issues, this issue is now resolved. --- Update from AWS: [4:22 PM PST] Starting at 4:11 AM PST some EC2 instances and EBS volumes experienced a loss of power in a single data center within a single Availability Zone (USE1-AZ4) in the US-EAST-1 Region. Instances in other data centers within the affected Availability Zone, and other Availability Zones within the US-EAST-1 Region were not affected by this event. At 4:55 AM PST, power was restored to EC2 instances and EBS volumes in the affected data center, which allowed the majority of EC2 instances and EBS volumes to recover. However, due to the nature of the power event, some of the underlying hardware experienced failures, which needed to be resolved by engineers within the facility. Engineers worked to recover the remaining EC2 instances and EBS volumes affected by the issue. By 2:30 PM PST, we recovered the vast majority of EC2 instances and EBS volumes. However, some of the affected EC2 instances and EBS volumes were running on hardware that has been affected by the loss of power and is not recoverable. For customers still waiting for recovery of a specific EC2 instance or EBS volume, we recommend that you relaunch the instance or recreate the volume from a snapshot for full recovery.
  • Time: Dec. 22, 2021, 11:55 p.m.
    Status: Monitoring
    Update: Update from Pagely: At this time our engineering team is not observing any issues with our customers' servers, databases, or sites. Our team will continue monitoring for any issues and providing relevant updates as they become available. --- Latest Updates from AWS: [12:03 PM PST] Over the last hour, after addressing many of the underlying hardware failures, we have seen an accelerated rate of recovery for the affected EC2 instances and EBS volumes. We continue to work on addressing the underlying hardware failures that are preventing the remaining EC2 instances and EBS volumes from recovering. For customers that continue to have EC2 instance or EBS volume impairments, relaunching affected EC2 instances or recreating affected EBS volumes within the affected Availability Zone continues to be a faster path to full recovery. [1:39 PM PST] We continue to make progress in addressing the hardware failures that are delaying recovery of the remaining EC2 instances and EBS volumes. At this stage, if you are still waiting for an EC2 instance or EBS volume to fully recover, we would strongly recommend that you consider relaunching the EC2 instance or recreating the EBS volume from a snapshot. As is often the case with a loss of power, there may be some hardware that is not recoverable, which will prevent us from fully recovering the affected EC2 instances and EBS volumes. We are not quite at that point yet in terms of recovery, but it is unlikely that we will recover all of the small number of remaining EC2 instances and EBS volumes. If you need help in launching new EC2 instances or recreating EBS volumes, please reach out to AWS Support. [3:13 PM PST] Since the last update, we have more than halved the number of affected EC2 instances and EBS volumes and continue to work on the remaining EC2 instances and EBS volumes. The remaining EC2 instances and EBS volumes have all experienced underlying hardware failures due to the nature of the initial power event, which we are working to resolve. We expect to make further progress on this list within the next hour, but some of the remaining EC2 instances and EBS volumes may not be recoverable due to hardware failures. If you have the ability to relaunch an affected EC2 instance or recreate an affected EBS volume from a snapshot, we continue to strongly recommend that you take that path.
  • Time: Dec. 22, 2021, 7:42 p.m.
    Status: Monitoring
    Update: Latest update from AWS: [11:08 AM PST] We continue to make progress in restoring power and connectivity to the remaining EC2 instances and EBS volumes, although recovery of the remaining instances and volumes is taking longer than expected. We believe this is related to the way in which the data center lost power, which has led to failures in the underlying hardware that we are working to recover. While EC2 instances and EBS volumes that have recovered continue to operate normally within the affected data center, we are working to replace hardware components for the recovery of the remaining EC2 instances and EBS volumes. We have multiple engineers working on the underlying hardware failures and expect to see recovery over the next few hours. As is often the case with a loss of power, there may be some hardware that is not recoverable, and so we continue to recommend that you relaunch your EC2 instance, or recreate your EBS volume from a snapshot, if you are able to do so.
  • Time: Dec. 22, 2021, 6:58 p.m.
    Status: Monitoring
    Update: Latest update from AWS: [9:28 AM PST] We continue to make progress in restoring connectivity to the remaining EC2 instances and EBS volumes. In the last hour, we have restored underlying connectivity to the majority of the remaining EC2 instances and EBS volumes, but are now working through full recovery at the host level. The majority of affected AWS services remain in recovery and we have seen recovery for the majority of single-AZ RDS databases that were affected by the event. If you are able to relaunch affected EC2 instances within the affected Availability Zone, that may help to speed up recovery. Note that restarting an instance at this stage will not help as a restart does not change the underlying hardware. We continue to work towards full recovery.
  • Time: Dec. 22, 2021, 4:30 p.m.
    Status: Monitoring
    Update: Almost all customer servers should be fully recovered at this time. Some high availability clusters are still running in impaired mode but that doesn't affect the availability of the sites at the moment.
  • Time: Dec. 22, 2021, 3:01 p.m.
    Status: Identified
    Update: Latest update from AWS: [6:51 AM PST] We have now restored power to all instances and network devices within the affected data center and are seeing recovery for the majority of EC2 instances and EBS volumes within the affected Availability Zone. For the remaining EC2 instances, we are experiencing some network connectivity issues, which is slowing down full recovery. We believe we understand why this is the case and are working on a resolution. Once resolved, we expect to see faster recovery for the remaining EC2 instances and EBS volumes. If you are able to relaunch affected EC2 instances within the affected Availability Zone, that may help to speed up recovery. Note that restarting an instance at this stage will not help as a restart does not change the underlying hardware. We have a small number of affected EBS volumes that are still experiencing degraded IO performance that we are working to recover. The majority of AWS services have also recovered, but services which host endpoints within the customers' VPCs - such as single-AZ RDS databases, ElastiCache, Redshift, etc. - continue to see some impact as we work towards full recovery.
  • Time: Dec. 22, 2021, 3:01 p.m.
    Status: Identified
    Update: We are continuing to work on a fix for this issue.
  • Time: Dec. 22, 2021, 2:01 p.m.
    Status: Identified
    Update: Update from Pagely: We are starting to see signs of recovery and have restored a portion of the affected servers. Some servers and RDS instances are still unavailable so we're working towards restoring those. Latest update from AWS: [5:39 AM PST] We have now restored power to all instances and network devices within the affected data center and are seeing recovery for the majority of EC2 instances and EBS volumes within the affected Availability Zone. Network connectivity within the affected Availability Zone has also returned to normal levels. While all services are starting to see meaningful recovery, services which were hosting endpoints within the affected data center - such as single-AZ RDS databases, ElastiCache, etc. - would have seen impact during the event, but are starting to see recovery now. Given the level of recovery, if you have not yet failed away from the affected Availability Zone, you should be starting to see recovery at this stage.
  • Time: Dec. 22, 2021, 1:31 p.m.
    Status: Investigating
    Update: The issue is also affecting our Atomic dashboard, which is currently unavailable or returning errors intermittently.
  • Time: Dec. 22, 2021, 1:26 p.m.
    Status: Investigating
    Update: Latest update from AWS: [5:18 AM PST] We continue to make progress in restoring power to the affected data center within the affected Availability Zone (USE1-AZ4) in the US-EAST-1 Region. We have now restored power to the majority of instances and networking devices within the affected data center and are starting to see some early signs of recovery. Customers experiencing connectivity or instance availability issues within the affected Availability Zone should start to see some recovery as power is restored to the affected data center. RunInstances API error rates are returning to normal levels and we are working to recover affected EC2 instances and EBS volumes. While we would expect continued improvement over the coming hour, we would still recommend failing away from the Availability Zone if you are able to do so to mitigate this issue.
  • Time: Dec. 22, 2021, 1:05 p.m.
    Status: Investigating
    Update: Latest update from AWS: [5:01 AM PST] We can confirm a loss of power within a single data center within a single Availability Zone (USE1-AZ4) in the US-EAST-1 Region. This is affecting availability and connectivity to EC2 instances that are part of the affected data center within the affected Availability Zone. We are also experiencing elevated RunInstances API error rates for launches within the affected Availability Zone. Connectivity and power to other data centers within the affected Availability Zone, or other Availability Zones within the US-EAST-1 Region are not affected by this issue, but we would recommend failing away from the affected Availability Zone (USE1-AZ4) if you are able to do so. We continue to work to address the issue and restore power within the affected data center.
  • Time: Dec. 22, 2021, 12:53 p.m.
    Status: Investigating
    Update: Update from AWS: [4:35 AM PST] We are investigating increased EC2 launch failures and networking connectivity issues for some instances in a single Availability Zone (USE1-AZ4) in the US-EAST-1 Region. Other Availability Zones within the US-EAST-1 Region are not affected by this issue.
  • Time: Dec. 22, 2021, 12:32 p.m.
    Status: Investigating
    Update: An issue with AWS in the US-EAST-1 (Virginia) region is currently causing an outage for a large number of servers in the region.
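
Several of the AWS updates above give the same recovery advice: restarting an impaired instance does not help, because a restart keeps it on the same failed hardware, so the faster path is to recreate the EBS volume from a snapshot in a healthy Availability Zone. Purely as an illustration of that guidance, here is a rough boto3 sketch; the snapshot ID, instance ID, Availability Zone, and device name are all placeholders.

```python
# Rough sketch of the recovery path AWS recommends above: recreate an
# EBS volume from a snapshot in an unaffected Availability Zone and
# attach it to an instance. All identifiers below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a new volume from the snapshot in a healthy AZ.
volume = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",   # placeholder snapshot ID
    AvailabilityZone="us-east-1a",         # any AZ other than the impaired one
    VolumeType="gp3",
)
volume_id = volume["VolumeId"]

# Wait until the new volume is ready, then attach it to an instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.attach_volume(
    VolumeId=volume_id,
    InstanceId="i-0123456789abcdef0",      # placeholder instance ID
    Device="/dev/xvdf",
)
print(f"Recreated volume {volume_id} from snapshot and attached it.")
```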

Check the status of similar companies and alternatives to Pagely

Avalara: Systems Active
Crisis Text Line: Issues Detected
Jamf: Systems Active
Mulesoft: Systems Active
Meltwater: Systems Active
HashiCorp: Systems Active
Datto: Issues Detected
Vox Media: Systems Active
Cradlepoint: Systems Active
Liferay: Systems Active
Zapier: Systems Active
Workato US: Systems Active

Frequently Asked Questions - Pagely

Is there a Pagely outage?
The current status of Pagely is: Systems Active
Where can I find the official status page of Pagely?
The official status page for Pagely is here
How can I get notified if Pagely is down or experiencing an outage?
To get notified of any status changes to Pagely, simply sign up for OutLogger's free monitoring service. OutLogger checks the official status of Pagely every few minutes and will notify you of any changes (a minimal sketch of this kind of polling appears at the end of this page). You can view the status of all your cloud vendors in one dashboard. Sign up here
What does Pagely do?
Pagely is a managed WordPress hosting platform created in the mid-2000s to simplify WordPress and make it more accessible. Today it provides high-quality managed hosting solutions to its clients.
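
The FAQ above notes that OutLogger checks the official status of Pagely every few minutes. Assuming the vendor publishes a Statuspage-style JSON endpoint, as many hosted status pages do, a minimal poller could look like the sketch below; the URL is an assumed placeholder, not a confirmed Pagely or OutLogger endpoint.

```python
# Minimal sketch of status-page polling: fetch a Statuspage-style
# JSON document every few minutes and report when the overall status
# description changes. The endpoint URL is an assumption.
import time
import requests

STATUS_URL = "https://status.example.com/api/v2/status.json"  # assumed endpoint

def poll(interval_seconds: int = 300) -> None:
    last = None
    while True:
        data = requests.get(STATUS_URL, timeout=10).json()
        # Statuspage-style payloads expose {"status": {"description": ...}}.
        current = data.get("status", {}).get("description")
        if current != last:
            print(f"Status changed: {last!r} -> {current!r}")
            last = current
        time.sleep(interval_seconds)

if __name__ == "__main__":
    poll()
```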