Maintenance Archives | Pure Energy Systems
04 Oct 2019

Upcoming Maintenance Windows

We’ve recently been notified by one of our data center providers that they need to schedule emergency maintenance windows to apply critical updates to the underlying physical hardware, and that these updates will require taking some of our servers offline while the work is performed.

While we know that these scheduled events are never ideal, we have <knock on wood> been very lucky in terms of infrastructure outages since moving from our own managed hardware into provider-managed cloud servers.

We’ve been collating the scheduled windows and are currently:

  • notifying all impacted clients via email of their scheduled outage window.
  • adding the impacting events into the Network Status / Scheduled Events section of our client portal.
    • This is particularly cool to us, as it allows us to input all future scheduled events, while clients will only see the events that actually impact them personally.  Yes, we’ve had the ability to do the same via email for some time, but this is just another cool feature of the ‘new’ client portal that we’re really getting to utilize for the first time.

We will continue to monitor the situation and should any further windows be scheduled by the provider, we will of course notify additional impacted users and add those events to the client portal as well.



05 Nov 2018

Upcoming Server Transfers

In just a couple weeks we’re going to schedule some maintenance windows in order to migrate client accounts around to facilitate some server replacements.  Now, I know what you’re thinking:

“Didn’t we just do this four years ago?  Wasn’t moving to the cloud supposed to do away with these hardware refresh cycles?”

It’s true that virtualization and “the cloud” have empowered services such as ours in ways never dreamed of in the days of “a physical server for every need”, but there are a couple of caveats to watch out for, including two forces that have come into play in our situation:

The Cloud Is Still Built On Actual Hardware

While it’s true that we no longer have to directly touch hardware, our infrastructure is still ultimately tied to physical hardware.  That hardware exists somewhere, and someone has to feed it, care for it, and eventually replace it.  The continual commoditization of PC server-level hardware means that the newer stuff is generally faster and cheaper than the stuff from a few years ago.  This leads our providers into interesting situations where they end up wanting to encourage people toward the newer hardware, so they can decommission the older systems.

This is currently happening with us.  A couple months ago I took a phone call that started something like this:

“How’d you like 33% more RAM, 66% more Storage space, and faster, newer generation CPUs, at the very same prices you pay today?”

Now, I’m no fool, so I asked what the catch was.  And I learned: we’d have to migrate our existing servers to their new hardware infrastructure.  The new hardware is in the same data centers and has all the same connectivity as our existing hardware, but we’d have to migrate over in order to enjoy the additional resources.  Thankfully they offer a “single click” migration that takes care of everything for us: we just hit the button for a given server, it goes offline, transfers to the new hardware, and spins back up…  about 3 hours later.

Okay, not really the best option in the world, but something for us to consider.  After all, more resources are always a good thing, and getting additional resources for the same price means we get to inject more resources into everyone’s hosting plans!  But… a roughly 3 hour downtime for each server?  That’s kind of a big chunk for us to commit to, even for a significant resource increase.

But there is another aspect to consider…

The Lingering Operating System

Believe it or not, in the last 8 months I have personally laid eyes on a production Red Hat Enterprise Linux 3 server, being used in a very critical production role at a customer site.  For anyone who doesn’t know, RHEL3 was released in 2004, its last update was released in 2007, and the entire release was declared End of Life in late 2010, over eight years ago.  The machine in question was stood up circa 2006.  It’s twelve years old, and while it’s a security vulnerability nightmare, it lives on today in all of its 32-bit glory.

Why?  Because it’s never needed to be rebuilt.  The company in question was an early adopter of server virtualization.  This rickety old machine was one of their first virtual systems, and it has persisted, running proudly, on numerous stacks of underlying hardware over its 12-year lifespan.  Virtualization and the flexibility it has brought us have minimized the number of situations that used to lead to a server getting rebuilt from the ground up.  While this is great for uptime and SysAdmin sanity, the dark lining is that it sometimes allows old machines to persist longer than they probably should.

This story isn’t that unique; we’ve seen countless instances of “it’s still running, so we left it be” over the years, and we’re even guilty of it ourselves.  While we moved to CentOS 7 as our platform of choice shortly after the release of CentOS 7.1, we’ve still got a fair amount of CentOS6 running in our environment today.  While CentOS6 is not scheduled for a full “End of Life” until the end of 2020, we want to get ahead of the curve.


So with these two data points lodged in our minds, we started thinking about the benefits of ‘refreshing’ our existing servers.  We built a list of possible benefits:

  • More resources, same cost.
  • Move everything to a newer Operating System.
  • Additionally, we want to move from the basic CentOS platform over to CloudLinux.  CloudLinux adds a number of features and capabilities that will benefit us in terms of server stability and management.
  • Look at retooling our systems to utilize PHP-FPM instead of SuPHP.  (Again, increasing performance for clients!)
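To give a rough sense of what the PHP-FPM retooling involves: unlike SuPHP, which spawns a fresh PHP process for every request, PHP-FPM keeps a pool of persistent workers per account, which is where the performance gain comes from.  As a hedged illustration only (the pool name, user, and paths below are hypothetical placeholders, not our actual configuration), a minimal per-account pool definition looks something like this:

```ini
; /etc/php-fpm.d/exampleuser.conf -- one pool per hosting account
; (all names and paths here are hypothetical examples)
[exampleuser]
user = exampleuser             ; PHP runs as the account owner, as with SuPHP
group = exampleuser
listen = /var/run/php-fpm/exampleuser.sock
listen.owner = apache          ; the web server needs access to the socket
listen.group = apache

; On-demand workers: no memory is used until the site actually gets traffic
pm = ondemand
pm.max_children = 5            ; cap on this account's PHP processes
pm.process_idle_timeout = 10s  ; idle workers are reaped after 10 seconds
```

Because the pool workers persist between requests, opcode caching actually pays off across hits, while the per-account `user`/`group` settings preserve the same ownership isolation SuPHP provided.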


And then we looked at the downsides of performing such a “full refresh” and weighed the options before us:

Do Nothing

  • — No resource upgrade
  • — CentOS6 continues to live on, with a necessary replacement in the next 24 months.
  • +++ Zero work on our part.

Migrate, but don’t “refresh”

  • +++ Resource Upgrade!
  • — CentOS6 continues to live on, replacement still necessary within 24 months.
  • — 3 hour downtime per server
  • -+- Schedule downtime, “push one button” migration process

Build New Servers and Migrate Clients

  • +++ Resource Upgrade!
  • +++ Operating System Upgrade!
  • +++ Much less downtime per client!
  • — Most amount of work required on our part.


So, looking over the three possibilities, it became clear that, well, it makes sense to invest the work and do things right, now. (Sorry team!)  Our plan is rather simple:

  1. Build out a new server, in the new hardware environment.
  2. Install everything, get it configured the way we like, test everything out.
  3. Schedule a window to migrate all customers from one “old” server over to the new one.
  4. Repeat steps 1-3 for each server that is getting a refresh.

Now, the only part of this that impacts our clients is step (3).  We’ll do all our normal tricks to minimize the downtime (lowering DNS TTLs, etc.), but we can’t make the downtime go away entirely.  What we can (and will) do is transfer accounts one at a time, so instead of your website being offline for 3 hours, it will be down for a period of time measured in minutes, based on the size of your individual account.  (We usually lower DNS TTLs to 10 minutes, and most accounts transfer within that period of time.)
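To make the TTL trick concrete: a few days before a move, we drop the TTLs on each zone’s records so resolvers stop caching the old address for long.  As a sketch only (the domain and IPs below are placeholders, not real client data), the relevant zone records look something like this just before cutover:

```zone
; Zone fragment for a site about to be migrated (names/IPs are placeholders)
$TTL 600                 ; default TTL lowered to 600s (10 minutes) ahead of the move
@       IN  A    ; old server address
www     IN  A    ; resolvers will cache this for at most 10 minutes
; Once the account finishes transferring, these records are repointed at the
; new server's IP, and the TTL can be raised back to its normal value.
```

With a 10-minute TTL, the worst case for a visitor is that their resolver serves the old address for a few minutes after cutover, which lines up with most accounts transferring within that same window.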

A few points of interest:

  • We’ll be emailing all the clients on each server in advance of their maintenance window.  Generally we aim for a 5-7 day heads-up for something like this.
  • Server names and IPs will be changing.   For cPanel clients who host their DNS with us (your domain is pointed to and, no action will be required on your part in order for your site to perform normally after the migration.  If, however, you have something out there hard-coded to a specific server IP address, you will need to adjust it.
  • New server names and IPs will be included in the announcement emails that go out to clients on a server before it is relocated.
  • When your account is relocated, your account information in our portal will be updated at the same time.  So if in doubt, you can always use the links within our client portal to access your cPanel interface.


Our aim is to begin the migrations in the next 10-14 days, and to have the entire project wrapped up before the end of the year.

01 Mar 2015

Upcoming Critical Maintenance / Reboots

Due to some critical software vulnerabilities that are scheduled to be disclosed to the public on March 10th, we need to schedule and complete maintenance windows in order to patch/update affected systems prior to the announcement being made public.

At this time only two customer-impacting servers are known to be affected, and our datacenter provider has scheduled maintenance windows for those as follows:

  • March 7th, 2015, starting at 03:00 PM Eastern Time
  • March 8th, 2015, starting at 02:00 PM Eastern Time

We are continuing to monitor this situation and are working with all of our datacenter providers to determine if any other systems are impacted by the vulnerability, and will update if we discover other systems that will need to be scheduled for maintenance windows.

21 Apr 2014

Server Refresh Update

At this time we are happy to announce that all PES backend systems (nameservers, our own website, backend management systems, etc) have been replaced with the new systems, and everything is running as intended. Over the upcoming two weeks we will begin to build and migrate clients to the new Client Servers. Clients will be notified in batches of the exact upgrade schedule and process for their servers.

19 Mar 2014

New Nameservers Online

As of this evening we have switched over ns3 & ns4 to their new homes. This step of our upgrade should require no client action, and no downtime to any client sites. For future reference, all mentions of ns3/ns4 on our site have been updated to reflect their new IPs: – Now (was – Now (was

Glue records for ns3/ns4 have been updated with the registrars for, so all clients using ns3/ns4 should have their nameserver traffic already re-directed to the new servers seamlessly.

Just as a precaution we will leave the old nameservers running for a few days before shutting them down, and will continue to monitor the DNS traffic to them to ensure everything has switched over.

Next up for our upgrades will be our backend systems (our own website, mailservers, and a few development servers). That will take a week or two for us to complete, and then we will begin migrating client servers and client websites. Clients will be notified in advance of any possible outage windows that will affect their sites, but we will strive to keep downtime as minimal as possible for everyone.

18 Mar 2014

Server Upgrades, Incoming..

It's that time again, server hardware refresh time!

Over the next few weeks we'll be rolling out a series of replacement servers both for our backend systems (our own website, management servers, nameservers, etc) as well as replacements for our cPanel customer-hosting servers.

Details and announcements will be emailed to all affected users in advance of any outage windows, though our goal is to minimize any impact to customer websites and services as much as possible.

02 Jun 2011

Emergency Maintenance June 2nd

We've been seeing sporadic connectivity issues within our network this afternoon, affecting primarily Pure Energy internal systems (such as our own website, and Connectivity to servers hosting our cPanel Shared Linux Hosting clients has not been affected.

Datacenter Engineers along with Cisco TAC have identified a software bug on an access router which is causing these issues. Engineers will be performing an emergency code change this evening, Jun 2nd, 2011 at 10:00 PM EDT to resolve this issue. The expected downtime is 20 minutes with the maintenance window being scheduled for up to 4 hours.

Start Time: 10:00pm EDT (6/2/2011)
End Time: 02:00am EDT (6/3/2011)
Expected Duration: 20 minutes

Service Impact:

During this maintenance, the affected services will be entirely unavailable. While the maintenance window is scheduled for 4 hours, we only expect around 20 minutes of downtime while the code is changed. During the actual outage, our own website, as well as anything that relies on other internal systems (email notifications from the client management system, order processing, etc), will be unavailable and/or delayed.

Again, this will not directly impact clients of our Shared Linux Hosting services, as those servers are on a different network.

22 Sep 2010

Scheduled Maintenance Sept. 26th

Date: Sunday, September 26, 2010 (09/26/2010)
Start time (Eastern): 1:00 AM
End time (Eastern): 5:00 AM
Location: WDC01
Duration: 4 hours

Engineers will be conducting maintenance on the transport network that provides connectivity between the WDC01 datacenter and the Equinix facility in Ashburn. This could impact public-facing services hosted out of the WDC datacenter, including the main Pure Energy Systems website,, and a select number of client VPS boxes located in the WDC datacenter. The bulk of our Shared Linux Hosting (cPanel) customers are located in the Texas facility, and should see no noticeable effect on their services, aside from possible problems accessing the website.

28 Jun 2010

Scheduled Network Maintenance – July 3rd

Scheduled Network Maintenance
Date: Saturday, July 3, 2010 (07/03/2010)
Start time (GMT+00:00): 05:00 AM
End time (GMT+00:00): 09:00 AM
Services affected: Public / Private Connectivity
Duration: 4 hours

Our Upstream Provider's Engineers will be performing maintenance on circuits providing connectivity on both the private and public facing (Internet Connectivity) networks. Over the course of this maintenance, new capacity from each router will be turned up while also migrating other circuits for more direct connectivity.

During this maintenance, Engineers will make every effort to mitigate the impact to customer traffic; however, customers are likely to see routing anomalies, packet-loss, and increased latency on both the public and private networks.

10 Apr 2010

Apache 2.2.15 and PHP 5.3.2 Upgrade…

This afternoon all client servers for our Shared Linux Web Hosting users are being upgraded to Apache 2.2.15 and PHP 5.3.2… Each server will be down briefly, for a few seconds, while Apache reloads. No major client-side changes or updates are expected to be required due to this minor upgrade.

(c) 2020 Pure Energy Systems LLC - All rights reserved.
