Shared Linux Web Hosting Archives | Page 2 of 4 | Pure Energy Systems
10 Mar 2019

Python support is here!

We’re happy to announce the immediate availability of Python support across all of our Shared Linux Web Hosting Plans.  This opens up the possibility of building your web application using Django, Wagtail, Web2Py, or any other modern web framework written in Python.

We currently support Python 2.7, 3.3, 3.4, 3.5, 3.6, and 3.7 across all our servers.

Users can deploy a new Python application into their account by looking for the “Setup Python App” icon under the “Software” section in cPanel.  You’ll be prompted to select the Python version you wish to use, the folder the application should live in (we do not recommend hosting it out of your public_html folder!), and the URL you want to associate with the application.

Once set up, cPanel will build a Python virtual environment for you using the version you selected, and will deploy a simple “Hello World!”-style Python application into the folder.  You can then modify, build, and create to your heart’s content.
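For reference, the generated starter application is an ordinary WSGI app: a Python callable named `application` that receives the request environment and a response-starting callback. Here’s a minimal sketch of that style of app, assuming the usual cPanel/Passenger layout where it lives in a file like `passenger_wsgi.py` (your generated scaffold may differ):

```python
# Minimal WSGI "Hello World!" of the kind the Setup Python App tool
# deploys.  The filename (passenger_wsgi.py) and exact scaffold are
# assumptions about the typical cPanel/Passenger setup.
def application(environ, start_response):
    # environ: CGI-style request variables; start_response: callback
    # that receives the status line and response headers.
    body = b"Hello World!"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    # WSGI apps return an iterable of byte strings.
    return [body]
```

Any Django or Web2Py project ultimately exposes the same kind of `application` callable, which is what lets the one scaffold serve every framework.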

24 Feb 2019

HTTP/2 Now Standard

In the wake of our recent server platform upgrades, we’re happy to announce that as of this weekend, all of our shared Linux web servers now support HTTP/2.  This is another of those ‘under the hood’ silent updates that will help increase the performance of websites when accessed via modern browsers, with no action required on anyone’s part to enable or benefit from the change.


02 Feb 2019

All Shared Hosting Plans Doubled in Size!

Resource upgrades are here!  We’ve been hinting about it pretty blatantly.  We’ve been itching to get this nailed down for a while now, as it’s been years since our last resource adjustments to the plans.  But we wanted to get all of our ducks in a row to maximize the potential impact before announcing any changes… and we’re very happy with the end result!

Today we’re happy to announce a rather large upgrade to the Disk Space and Bandwidth allocations that come standard with all of our Shared Hosting Plans:

  • Linux Bronze – $5.95/month
    • Was 400MB Disk Space  |   8GB Bandwidth
    • Now 800MB Disk Space  |  16GB Bandwidth
  • Linux Bronze+ – $7.95/month
    • Was 600MB Disk Space  |  15GB Bandwidth
    • Now 1200MB Disk Space  |  30GB Bandwidth
  • Linux Bronze++ – $9.95/month
    • Was 900MB Disk Space  |  20GB Bandwidth
    • Now 1800MB Disk Space  |  40GB Bandwidth
  • Linux Silver – $14.95/month
    • Was 1536MB Disk Space  |  30GB Bandwidth
    • Now 3072MB Disk Space  |  60GB Bandwidth
  • Linux Silver+ – $19.95/month
    • Was 2048MB Disk Space  |  40GB Bandwidth
    • Now 4096MB Disk Space  |  80GB Bandwidth
  • Linux Gold – $29.95/month
    • Was 4096MB Disk Space  |  80GB Bandwidth
    • Now 8192MB Disk Space  |  160GB Bandwidth


Yes, that is correct, we’re doubling the amount of resources available in every plan.

No action is required on anyone’s part to take advantage of these new plans; all existing clients have already had their resource allocations increased to the new levels.


27 Jan 2019

ModSecurity: Default on Every Plan

Back in November we discussed the need for web application firewalls and some of the options out there for securing your site with one.  And, well, shortly thereafter someone reminded me:

Hey, wait, don’t we have ModSecurity implemented everywhere?

Oh yeah, ModSecurity, the old faithful of general-purpose WAF systems.  To be honest, we’ve run it across all of our servers for at least a year now, silently, in the background, with nobody noticing.  We’re currently utilizing the OWASP Core Rule Set on all of our servers, and while it does detect and prevent a wide range of ‘standard issue’ web-based attacks (Cross-Site Scripting, Code Injection, SQL Injection, etc.), the fact is, we have to be deliberately conservative in what we detect and block at the server level.  Something that may be ‘fishy’ to one website or code stack could be ‘business as usual’ on another site, so it’s not possible for us to make the rules “extremely secure”, because we don’t want to incorrectly block traffic that someone may actually need for their site to function.

Think of our ModSecurity implementation as a ‘first line of defense’: it keeps an eye out for the really unscrupulous traffic, the things that we can look at and say “no way that can be legit traffic!”  But the more granular, focused, specific needs of a given web platform?  That’s where the need for a Web Application Firewall tailored to your own site and needs comes in.

One thing we have rolled out in the last couple of weeks, just in case, is the ability for clients to disable ModSecurity on a given domain under their account.  We don’t recommend it, and I believe we’ve only had one instance of a client really, really wanting to do it, but that option is there now.  By default, we enable it everywhere, but if you go into cPanel, under the “Security” section, there’s now a ModSecurity area where you can disable it on a per-domain basis.  It’ll warn you that this is potentially unsafe and not recommended outside of debugging purposes (usually to prove that it’s not ModSecurity causing an issue with a site), but, if you need it, the option is there.

26 Jan 2019

IPv6: Ready, but not yet Prime-Time

IPv6 is one of those weird tech initiatives, in that it’s something everyone seems to agree needs to happen, but actually getting there is taking way longer than everyone seemed to think it would.  We’ve been running IPv6 on many of our own platforms and services for a while now, but coverage has not been 100%, nor had we fully deployed it to customer hosting servers and websites, until now.  Today we’re happy to announce that all customer sites and services are now fully available via IPv6.  Now, odds are, either you’re reading this and going “Nice”, or you’re going “What the heck is IPv6?”, so let’s take a quick moment to cover some likely questions you may have…

What is IPv6 and why do we need it?

Every device that’s directly connected to the Internet needs a unique address that identifies that specific machine.  The Internet as we’ve had it all these years runs on a protocol called IP; more specifically, IPv4.  IPv4 gives us unique 32-bit addresses written as four dotted numbers, such as  We then use DNS to tell the world “www.example.com =”: when you enter or click on a website URL, your computer looks up the DNS name, gets back that unique address, and that’s how it knows where to connect to pull up the site.

IPv4 addresses can range from “” to “”, giving a little under 4.3 billion possible unique addresses (I hear the deep tech folks groaning already, but bear with me).  Due to the way IP addresses are carved up into subnets, and the way a number of ranges were reserved for other uses way back in the early days of the Internet, we don’t actually have that many to go around.  Over half a billion were marked ‘reserved’ right off the bat for things like “inside” network space, multicast, etc., so the true number of usable IPv4 addresses is quite a bit smaller than 4.3 billion.

Now, keep in mind, while the Internet as we now enjoy it didn’t exist quite yet, IPv4 was designed in the early 1980s, so at that time, I’m sure the idea of “more than 4 billion devices all sharing the same global network” seemed like “Yeah, that’s not going to be a problem, ever!”   But of course, over the years, we’ve, well, we’ve used them up.  It’s been an ongoing issue for quite some time, but there have been workarounds that have kept things going without major issue:

  • NAT/Proxies/Firewalls.   Odds are you probably have more than one internet connected device in your house.  PCs, laptops, tablets, gaming systems, cameras, etc.  They all have an IP address, but likely not a “public” IP address.  It’s fairly common practice for your ISP to provide some sort of gateway/router device that actually obtains one public IP address, and then handles NAT for all of the devices inside your home.
  • Some of the previously “reserved” space has been “unreserved” and allocated out to the regional registries.
  • Some larger companies that had large swaths of IP space allocated to them have returned some of it, or in other cases ceased to function as entities and returned huge swaths to be redistributed.  (HP, or companies they merged with or acquired over the years, at one point had 64 million IPs that they turned back to the registries.)

I don’t want to veer too far into the discussion of IPv4 address exhaustion; the Wikipedia article on the topic gives a great overview of how we got here.  But the basic gist is, while IPv4 got us to where we are today, something different is going to have to take over at some point.

Where did this whole IPv6 thing come from?

Thankfully, in the early 1990s (even then, the Internet was still not “the thing” it is today), someone had the foresight to think that 4.3 billion might one day not be enough addresses, so a bunch of folks got together and started brainstorming.  While early versions of IPv6 support made it into things like the Linux kernel in the mid-90s, we didn’t actually have a “Draft Standard” for IPv6 until late in 1998, and it didn’t become a true “Internet Standard” until July 2017.  These things, clearly, take time.

So what does IPv6 bring us?  Well, an IPv6 address looks like this:

2604:a880:0:1010:0:0:76:7001   (Again, our main website)

It’s a mouthful, no doubt, and it’s going to make all of us even more dependent on DNS than we are today.  But it’s a 128-bit address.  That means instead of 4.3 billion possibilities, we now have…  well, billions and billions of possible addresses (340 billion billion billion billion addresses, give or take).  So yeah, it should solve our IP address shortage.
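If you want to poke at these addresses yourself, Python’s `ipaddress` module will parse them, collapse the runs of zero groups into the canonical `::` shorthand, and let you do the address-space math:

```python
import ipaddress

# The sample IPv6 address from above.
addr = ipaddress.ip_address("2604:a880:0:1010:0:0:76:7001")

# The longest run of zero groups collapses to "::" in the
# canonical compressed form you'll usually see written out.
print(addr.compressed)  # 2604:a880:0:1010::76:7001

# 128 bits of address space versus IPv4's 32:
print(2**128)  # 340282366920938463463374607431768211456
```

That last number is the “340 billion billion billion billion” mentioned above, written out in full.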

Why is it taking so long?

It’s taken quite a while just to get the standard nailed down, and even longer to figure out exactly how to implement it in every scenario.  Then you have the classic adoption problem: nobody wants to be the first ISP to offer “IPv6 only” access if there’s a shortage of content available on IPv6, so ISPs continue to scrounge around and find more IPv4 addresses they can utilize, and (as far as I’m aware) nobody has yet been forced into “IPv6 only” land.

And until there are customers on the IPv6 network, there’s no push on the content providers into offering content on IPv6….  Chicken, meet egg.

Dual Stack implementations solve this: with a Dual Stack configuration, you give your machine both an IPv4 and an IPv6 address, and you can be connected to, and connect to others, via either one.

So for instance, all of our servers now have an address in both IPv4 and IPv6.  We tell things like our web server to listen on both, and now we’re accessible on both addresses.  Then we publish both via DNS (While IPv4 addresses are stored in ‘A’ records, DNS has a separate ‘AAAA’ record for IPv6 addresses.)
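From the client side, a dual-stack lookup is just `getaddrinfo()` returning results from both address families.  Here’s a small sketch in Python; numeric addresses are used so it runs without any DNS, but you can swap in a real hostname (say, `www.example.com`) to see actual A/AAAA results:

```python
import socket

# getaddrinfo() returns one entry per (family, socktype) combination
# the host resolves to.  For a dual-stack hostname you'd see AF_INET
# (from the A record) and AF_INET6 (from the AAAA record) side by side.
def families_for(host, port=80):
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr).
    return sorted({family.name for family, *_ in infos})

print(families_for(""))                      # ['AF_INET']
print(families_for("2604:a880:0:1010::76:7001"))  # ['AF_INET6']
```

A program that connects via `getaddrinfo()` like this needs no changes at all to start using IPv6, which is a big part of why the transition can happen so quietly.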

So now, the content is there, even if the visitors are not, just yet, there in large numbers.

So what does IPv6 mean for me?

All the “under the hood” work to make this work for your sites hosted with us is already done.  All of our servers now run in Dual Stack mode, and we’ve ensured web, mail, and other services on every box are listening on both the IPv4 and IPv6 addresses.

So in general, not a whole lot really changes for you just yet, but it’s something you’ll want to be aware of, especially if you write your own code for your website.  You’re going to start seeing those “new, longer addresses” show up in things like your website logs, and at first, it’s going to be a bit confusing and unsettling. 😉

Here’s the part that will blow your mind (it blew mine): there’s a chance, however small, that you may be using IPv6 to read this right now and not even know it.  Many of the ISPs that have started implementing IPv6 are doing so with Dual Stack implementations, quietly, in the background.  A couple of days after implementing IPv6 on our own website, we noticed IPv6 addresses appearing in our client portal logs.  Clients were connecting to the site via IPv6, and they probably didn’t even know it.  That’s, quite honestly, rather astonishing: for something as fundamental to the Internet as IP, the entire thing can be shifted around under the hood without a visitor even needing to notice.

While most ISPs are being fairly quiet about their embrace of IPv6, there are some larger, established ISPs starting to really make inroads with it, and the number of folks who have IPv6 available to them continues to climb.  It’s not ready to take over the world yet, obviously.  Our own internal observations show about 3-5% of our traffic coming in over IPv6, and I believe that number is skewed slightly higher by our own servers preferring to talk with one another over IPv6.

But the data consumers are starting to arrive via IPv6, and now with this rollout, we’re ready for them.

If you are interested in finding out the state of your own internet connection, and if it is IPv6 enabled, feel free to visit the IPv6 Test website.

21 Jan 2019

Server Migrations and The Future…

As previously discussed back in November, we’ve been working on a project to upgrade all of our server infrastructure, both to stay up to date with the latest operating system releases and to unlock some new technology and ultimately improve the service we provide to our clients.  This project has taken a couple of weeks longer to complete than we had originally anticipated, as we discovered a couple of additional benefits of the new plan and wanted to properly investigate how best to implement them.  But I’m happy to say we can now see the light at the end of the migration tunnel: within the next two weeks we should be wrapping up the last of the account transfers, which means it’s time to talk a bit more about the changes that occurred, and to start thinking ahead to what we do once we come out the other side.

Under The Hood

There’s a number of “geeky fun” changes under the hood that will improve both stability and performance of all our servers.  I’m going to add a TODO item on my list to talk about them a bit in-depth in a separate blog post, so that folks who are interested in that type of thing can geek out with me, while everyone else can just focus on the user-noticeable changes.

What I will say now though is, for the first time in a very long time, once this migration is complete, all of our hosting servers will be on the same hardware platform, the same OS releases, and the same software stack, all configured in the very same way.  And that makes our lives much easier going forward.

User Noticeable Changes, Day 1

No more additional cPanel Themes

For anyone who didn’t notice, some of our servers had additional cPanel themes installed, allowing users to choose how cPanel would look to them.  It was something we implemented a couple of years back as a bit of a lark to see how people liked it.  At the end of the day, the data showed us that only a small subset of clients ever checked out the alternate themes, and even fewer stuck with them, and the confusion they created by not matching our documentation only caused problems.  We actually stopped rolling these out on new servers quite some time ago, but with the migration project, the last vestiges of “alternate cPanel themes” will cease to exist entirely.

Ruby (Rails) Support Goes End of Life

Late in 2008 we added support for Ruby on Rails applications to our servers.  With the way these integrated with Apache/cPanel, it was always something of a headache for everyone involved.  Applications needed to be set up for Passenger, firewall ports needed to be opened…  it was, quite honestly, just not as easy as PHP, for us or for our clients.  Much like the cPanel themes, client interest was extremely minimal.  In the last 10 years we’ve had perhaps a couple dozen inquiries about Ruby.

Because the implementation of Ruby under cPanel was always somewhat sketchy, we never felt comfortable marketing it as a big selling point, and at the same time, I’m sure that true Ruby aficionados tend to steer clear of cPanel based environments for their Ruby hosting needs because they’ve heard the stories.   At the end of the day, we want to focus on what we can do best, and always offer a service that we’re proud of, so we’re discontinuing Ruby support across our entire fleet once and for all.  From what we’ve seen during the migrations, there are no active Ruby applications hosted on any of our servers, so I believe this is more of a housekeeping / cleanup issue than something that will actually impact clients, but we want to be transparent about Ruby’s removal.

Better Performance

While I’ll cover this more in depth in the later ‘technical nitty gritty’ post, the short answer is that we believe all clients will see an improvement in the performance of their sites.  Even clients on servers that were already at our current standard hardware level will see an improvement based strictly on the performance gains we’re seeing with the new software stack.

User Noticeable Changes, The Future

Plan Resource Upgrades

Hardware gets more powerful and less expensive with time.  Our policy has always been to pass those savings on to our clients in the form of increased resource allocations in each of our plans.  It’s been 4 years since our last resource adjustments, and getting all of our servers upgraded is the last blocker to unveiling our next round of upgrades.  While we’re not quite ready to release the hard numbers just yet, I am comfortable saying this much:

Our 2015 upgrades resulted in around a 33% increase in resources.
Our 2019 upgrades will exceed that percentage increase.

More details will be forthcoming regarding the resource upgrades once we get the last few servers migrated.


05 Nov 2018

Upcoming Server Transfers

In just a couple weeks we’re going to schedule some maintenance windows in order to migrate client accounts around to facilitate some server replacements.  Now, I know what you’re thinking:

“Didn’t we just do this four years ago?  Wasn’t moving to the cloud supposed to do away with these hardware refresh cycles?”

It’s true that virtualization and “the cloud” have empowered services such as ours in ways never dreamed of in the days of “a physical server for every need”, but there are a couple of caveats and things to watch out for, including two forces that have come into play in our situation:

The Cloud Is Still Built On Actual Hardware

While it’s true that we no longer have to directly touch hardware, our infrastructure is still ultimately tied to physical hardware.  That hardware exists somewhere, and someone has to feed it, care for it, and eventually replace it.  The continual commoditization of PC server level hardware means that the newer stuff is generally faster and cheaper than the stuff from a few years ago.  This leads our providers into interesting situations where they end up wanting to encourage people towards the newer hardware, so they can decommission the older systems.

This is currently happening with us.  A couple months ago I took a phone call that started something like this:

“How’d you like 33% more RAM, 66% more Storage space, and faster, newer generation CPUs, at the very same prices you pay today?”

Now, I’m no fool, so I asked what the catch was.  And I learned: we’d have to migrate our existing servers to their new hardware infrastructure.  The new hardware is in the same data centers and has all the same connectivity as our existing hardware, but we’d have to migrate over in order to enjoy the additional resources.  Thankfully they offer a “single-click” migration that would take care of everything for us: we just hit the button for a given server, it goes offline, transfers to the new hardware, and spins back up…  about 3 hours later.

Okay, not really the best option in the world, but something for us to consider.  After all, more resources are always a good thing, and getting additional resources for the same price means we get to inject more resources into everyone’s hosting plans!  But…  a roughly 3-hour downtime for each server?  That’s kind of a big chunk for us to commit to on the strength of a resource increase alone, even a significant one.

But there is another aspect to consider…

The Lingering Operating System

Believe it or not, in the last 8 months I have personally laid eyes on a production Red Hat Enterprise Linux 3 server being used in a very critical, production-oriented way at a customer site.  For anyone who doesn’t know, RHEL 3 was released in 2003, its last update was released in 2007, and the entire release was announced as End of Life in late 2010, over eight years ago.  The machine in question was stood up circa 2006.  It’s twelve years old, and while it’s a security vulnerability nightmare, it lives on today in all of its 32-bit glory.

Why?  Because it’s never needed to be rebuilt.  The company in question was an early adopter of server virtualization.  This rickety old machine was one of their first virtual systems, and it has persisted, and run proudly, on numerous stacks of underlying hardware over its 12-year lifespan.  Virtualization and the flexibility it has brought us have minimized the number of situations that used to lead to a server getting rebuilt from the ground up.  While this is great for uptime and SysAdmin sanity, the dark lining is that it sometimes allows old machines to persist longer than they probably should.

This story isn’t that unique; we’ve seen countless instances of “It’s still running, so we left it be” over the years, and we’re even guilty of it ourselves.  While we moved to CentOS 7 as our platform of choice shortly after the release of CentOS 7.1, we’ve still got a fair amount of CentOS 6 running in our environment today.  While CentOS 6 is not scheduled for full “End of Life” until the end of 2020, we want to get ahead of the curve.


So with these two data points lodged in our minds, we started thinking about the benefits of ‘refreshing’ our existing servers.  We built a list of possible benefits:

  • More resources, same cost.
  • Move everything to a newer Operating System.
  • Additionally, we want to move from the basic CentOS platform over to CloudLinux.  CloudLinux adds in a bunch of features and abilities that will benefit us in terms of server stability and management.
  • Look at retooling our systems to utilize PHP-FPM instead of SuPHP.  (Again, increasing performance for clients!)


And then we looked at the downsides of performing such a “full refresh” and weighed the options before us:

Do Nothing

  • — No resource upgrade
  • — CentOS6 continues to live on, with a necessary replacement in the next 24 months.
  • +++ Zero work on our part.

Migrate, but don’t “refresh”

  • +++ Resource Upgrade!
  • — CentOS6 continues to live on, replacement still necessary within 24 months.
  • — 3 hour downtime per server
  • -+- Schedule downtime, “push one button” migration process

Build New Servers and Migrate Clients

  • +++ Resource Upgrade!
  • +++ Operating System Upgrade!
  • +++ Much less downtime per client!
  • — Most amount of work required on our part.


So, looking over the three possibilities, it became clear that, well, it makes sense to invest the work and do things right now. (Sorry team!)   Our plan is rather simple:

  1. Build out a new server, in the new hardware environment.
  2. Install everything, get it configured the way we like, test everything out.
  3. Schedule a window to migrate all customers from one “old” server over to the new one.
  4. Repeat steps 1-3 for each server that is getting a refresh.

Now, the only part of this that is impactful to our clients is step (3).    We’ll do all our normal tricks to minimize the downtime (lowering DNS TTLs, etc), but we can’t make the downtime go away entirely.  What we can (and will) do is transfer accounts one at a time, so instead of your website being offline for 3 hours, it will be down for a period of time measured in minutes, based on the size of your individual account.  (We usually lower DNS TTLs to 10 minutes, and most accounts transfer within that period of time).

A few points of interest:

  • We’ll be emailing all the clients on each server in advance of their maintenance window.  Generally we aim for a 5-7 day heads up for something like this.
  • Server names and IPs will be changing.   For cPanel clients who host their DNS with us (your domain is pointed at our nameservers), no action will be required on your part in order for your site to perform normally after the migration.  If, however, you have something out there hard-coded to a specific server IP address, you will need to adjust some things.
  • New server names and IPs will be included in the announcement emails that go out to clients on a server before it is relocated.
  • When your account is relocated, your account information in our portal will be updated at the same time.  So if in doubt, you can always use the links within our client portal to access your cPanel interface.


Our aim is to begin the migrations in the next 10-14 days, and to have the entire project wrapped up before the end of the year.

16 Sep 2018

Simplifying our plans…

I stumbled across something the other day that kind of made me giggle: a piece of marketing copy that we were using in the spring of 2002 to advertise the services we had newly released to the public.  Within the copy was an overview of our “Linux Bronze” plan…

Linux Bronze Plan – $5.95/month
– 100 Megs Storage
– 3 GB Transfer
– 5 SubDomains
– 20 Email Accounts
– 5 FTP accounts
– 5 mySQL Databases
– 1 Mailman Mailing List

Now, our pricing hasn’t changed, and our plan names haven’t gotten any more imaginative, but our plans themselves have grown over the years as technology has continued to improve and costs have gone down.  (And honestly, they’re about to do so again, but there is another project we need to finish before we can talk about that, so I’ll leave it for another update.)

To put things in perspective, at the time we launched this service, our original shared hosting servers were, if I recall properly, Pentium III boxes (600MHz or thereabouts, single core) sporting a lofty 1GB of RAM and, if I recall, 40GB drives.

So at the time, storage and transfer, while important, were not the only resources we needed to balance and manage.  Adding a subdomain to a website?  There went a few kilobytes of RAM.  Email accounts?  A few KB more for each of those.  So our plans also established limits on things such as these, as shown in the list above.  And over the years, those quotas have increased as well, but to be honest, they’re really only there today because of a very deep-seated dislike of the term “unlimited” around here.  Early on we watched so many “Unlimited <X>!” hosting firms crash and burn that it left a very bitter taste in our mouths for the word.  So much so that I simply refuse to tick the ‘Unlimited FTP accounts’ button in cPanel.

It’s not just the tainted history of the word; there’s a very real, tangible issue that comes to mind.  The rule here is “If we offer something, we have to be able to actually provide it”.  So if we say “Unlimited Email Accounts!” it needs to be doable, in real, unlimited form.  Now, the fact is, if there were no quota on email accounts, and someone were to script a “log into cPanel and create random email accounts for my site” routine, it would eventually crash, and it might even impact performance for other people on the server.  Sure, it’d probably hit a couple million mailboxes before a problem occurred, and I can’t think of a reason anyone would do it other than to see if they could, but eventually a process would run out of RAM, or inodes, or… something, and the server, along with all the other clients on it, would suffer.

Offering anything in “unlimited” form just doesn’t work when actually put to the test.

But at the same time, it’s now 2018, and defining account plans by the number of domains or email accounts just feels… dated.

So we’re not going to do it anymore.

Going forward, all of our shared Linux hosting plans are built purely around the amount of Disk Space and Data Transfer.  Nothing else.  Now, this does not mean “unlimited” domains, email accounts, etc.  There’s still a limit built into all of our plans for these things.  It’s just…  well, it’s set high enough that we don’t think anyone is actually going to encounter, or probably even get close to, the limit.  And if you do start to find yourself getting close, open a support ticket, and we’ll look at the situation and see what we can do to accommodate your needs.

I’m internally calling it “reasonable limits”.  In my mind it’s a bit of “if you put them to any reasonable use, you’ll be fine” and a bit of “if you come up with a way to use them that we didn’t think of, we’ll reasonably review the situation with you”.

But whatever we call it internally, externally, it means our plans just got a lot simpler.

27 Aug 2010

System Upgrades

Tonight all client servers for our Shared Linux Web Hosting users have had updates to a number of various software packages to bring them all to the same level. All client servers are now running:

Apache 2.2.16
PHP 5.3.3
Ruby On Rails 2.3.8

No major client-side changes or updates are expected to be required due to this minor upgrade.

Part of the reason for this upgrade is to pave the way for the migration from MySQL 5.0.x to MySQL 5.1.x, which we intend to undertake within the next couple of weeks.  The upgrade to MySQL 5.1 will require rebuilding anything that “touches” MySQL (i.e., PHP), and we wanted to ensure all servers were running the same versions beforehand.

(c) 2020 Pure Energy Systems LLC - All rights reserved.
