11 Nov 2019

Breach at Web.com: Why you might care and not even know it.

Just under two weeks ago, the folks at Web.com group announced they had an incident and that their systems had been compromised.  In their release, they make the following assertions:

  • No credit card data was compromised as a result of this incident.
  • The information accessed was limited to contact details such as “name, address, phone numbers, email addresses and information about the services offered to a given account holder”

Now, while it appears they believe that customer passwords were not disclosed, they are having everyone reset their passwords as a precaution, just in case.

We’ve had a few clients reach out asking about this breach in particular: they’ve received an email, but are confused because they don’t remember having done business with ‘web.com’ in the past.

Web.com also owns the following other companies you may have had a relationship with in the past:

  • Register.com
  • Network Solutions

So, given the popularity of Network Solutions and Register.com as domain registration providers in the past, odds are you’ve received the email regarding an account under one of those two names instead of ‘web.com’.   Unfortunately, in a couple of conversations, we’ve discovered clients who had an old, no longer maintained email address connected to a domain registration with Network Solutions, and so they never received an email regarding the situation at all.

Our suggestion: if you have an account with any Web.com group entity, including Network Solutions and Register.com, and have not already done so, change any passwords you have there at once.

And if you are still practicing poor password hygiene by using the same password for multiple services/accounts, we’d recommend changing your password at any site that used the same one as well.  If the actor who snagged the data from web.com does manage to decode passwords, someone will certainly begin trying to utilize those email/password combinations at other popular sites.

16 Jun 2019

The Evolution of PHP under Apache

Over the sixteen years we’ve been doing this web hosting thing, we’ve seen a multitude of changes in the way web servers run, and more notably, in how they run scripts written in languages such as PHP.

In the beginning: PHP as CGI

In the beginning, running a PHP script was a fairly simple affair.  The Apache web server of the day used a prefork model for handling requests: the main server maintained a pool of child processes, each incoming request was allocated to a worker, and that worker processed the one request.  If a PHP script was involved, a separate CGI process was spawned to run PHP and render the scripted part of the page.  A whole slew of settings in your Apache configuration would boil down to “PHP files located in /home/bob/public_html/ get run under the ‘bob’ user account”.
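A configuration of that era might have looked something like this. This is a sketch, not our actual config: the paths and hostnames are illustrative, and the exact handler wiring varied between setups.

```apacheconf
# Hand .php files off to an external PHP CGI binary (illustrative paths)
ScriptAlias /cgi-bin/ "/usr/local/cgi-bin/"
Action application/x-httpd-php "/cgi-bin/php"
AddType application/x-httpd-php .php

<VirtualHost *:80>
    ServerName bob.example.com
    DocumentRoot /home/bob/public_html
    # With suexec compiled in, CGI processes for this vhost run as 'bob'
    SuexecUserGroup bob bob
</VirtualHost>
```

The per-user execution came from suexec: each virtual host declared its user, and every CGI invocation (including the PHP interpreter) was switched to that account before running.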

This worked, but it was anything but fast.  As time went on, PHP became more popular and the sites it powered grew more complex.  Performance became a big issue, and in order to keep things efficient and cost effective, a better way had to be found.

Enter mod_php

Along came mod_php, which sort of up-ended the entire apple cart compared to how we all thought of PHP.   mod_php was an actual module that you linked into Apache itself, so your PHP scripts were now interpreted by the web server directly.  No longer did Apache have to ‘spawn out’ another process to render your PHP content.  This was fast.  Then OpCode caching became a thing, with add-ons such as the APC module: now you could allocate some RAM to persistently cache compiled PHP code (and possibly other data) between requests.  Things got extremely fast, and there was much rejoicing on the part of web hosts everywhere as server CPU usage decreased.
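Wiring mod_php in was as simple as loading the module and mapping the .php extension to it. A sketch of the typical PHP 5-era directives (module paths vary by distribution):

```apacheconf
# The PHP interpreter is linked into every Apache child process,
# so no external process is spawned per request.
LoadModule php5_module modules/libphp5.so
AddHandler application/x-httpd-php .php
DirectoryIndex index.php index.html
```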

The downside: this meant all PHP scripts needed to run as the same user that Apache ran as, so everyone’s scripts ran as “nobody”, or possibly “apache”.  Having all the PHP scripts on a server that hosts multiple clients and websites run under the same security user account was (although I don’t believe anyone initially saw it coming at the time) a huge problem.  The industry started to see a rise in the number of defaced and hijacked sites, and it quickly became evident that the “everything runs as nobody” model was to blame.

Malicious actors quickly built PHP scripts that, when run, would scour a server, for instance by looking for any *.php files in /home/*/public_html/, and every time one found a PHP file it could write to, it would inject whatever malicious code it desired to spread.  Since all the PHP scripts on a server ran as the same user, that user account would, by default, have access to all the scripts on the server.   It only took finding one single exploitable script on one website from which to launch their script to allow a malicious actor to infect (or delete) all the PHP scripts on the server.  This, of course, was not a good situation, as now every script and client account on your server was only as secure as the most out-of-date, un-patched script on the server!  Something else would need to come along…

(Side Rant:  If, in 2019, you find a web host that is still suggesting that you chown your scripts to “nobody:nobody” or “apache:apache”, be very afraid.  The reasons for this fear should be evident from the last paragraph, but I feel the need to reiterate the point, because doing so is, well, just a huge mistake security-wise!)

SuPHP to the rescue

Then along came SuPHP, which combined most (but not all) of the performance benefits of mod_php with the flexibility of scripts once again running as individual user accounts.   It was something of a balanced trade-off between “fast” and “secure”.
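With mod_suphp, the PHP handler invokes a setuid wrapper that switches to the owning user before executing the script. A minimal sketch (the directory and user names here are just for illustration):

```apacheconf
LoadModule suphp_module modules/mod_suphp.so
suPHP_Engine on
AddHandler application/x-httpd-suphp .php
suPHP_AddHandler application/x-httpd-suphp

<Directory /home/bob/public_html>
    # Scripts under this directory execute as 'bob', not as the Apache user
    suPHP_UserGroup bob bob
</Directory>
```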

Apache Gains Workers, But They’re Useless To Us

Along with other great advances to the Apache web server, it eventually gained the ability to use ‘workers’ instead of just preforked child processes.  While child processes worked fine for years, the ‘every connection has a dedicated child process that feeds it’ model involves a fair amount of overhead.

So the Apache team implemented the ‘Worker’ MPM module.  The server still forks a group of child processes in advance, but each child process is capable of serving multiple http connections via Threads.
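The tuning knobs for the worker MPM reflect that two-level model: you size the number of child processes and the number of threads each one carries. For example (the values below are illustrative, not a recommendation):

```apacheconf
<IfModule mpm_worker_module>
    StartServers             2
    ServerLimit             16
    ThreadsPerChild         25    # threads (connections) per child process
    MaxRequestWorkers      400    # total simultaneous connections = children x threads
    MaxConnectionsPerChild   0    # never recycle a child based on request count
</IfModule>
```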

This brought a sizeable boost in performance on hosting servers (notably a decrease in the memory needed to service a given number of HTTP connections), but it had one big drawback… it didn’t work with mod_php, whose reliance on non-thread-safe libraries made it unsafe to run inside threaded workers, so it was no good to us for quite some time.

mod_lsapi:  The best of all worlds?

With the recent migration of our servers over to the CloudLinux platform, we’ve gained the ability to leverage the lsapi Apache/PHP interface to achieve a modern ‘best of all worlds’ approach:

  • Apache runs with the mpm_worker module, allowing one child process to handle multiple connections via threads.
  • mod_lsapi interfaces those workers to PHP via PHP-FPM whenever a PHP file needs to be processed.
  • PHP-FPM gives us flexibility, security, and performance all in one.
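On a CloudLinux server the glue for this is small; roughly the following (the exact directives depend on the CloudLinux version, so treat this as a sketch):

```apacheconf
# Route .php requests through mod_lsapi to the per-user PHP backend
LoadModule lsapi_module modules/mod_lsapi.so
lsapi_engine on
AddType application/x-httpd-lsphp .php
```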

Wait, what’s PHP-FPM?

PHP-FPM (FastCGI Process Manager) is sort of a ‘PHP server’, if you will.  It is responsible for maintaining a pool of PHP processes that can be handed your PHP code to process.

The flexibility comes from being able to control the version of PHP at a per-site level, which is how we provide the ability for users to choose which version of PHP their code runs against.

On the security front, each user has their own self-contained PHP server process, providing a ‘pool’ of PHP processes at their disposal, each running as that user, so there are no worries about ‘nobody’ permissions or users being able to access each other’s scripts.

For performance, well, we ‘persist’ each user’s pool for a set amount of idle time after the last PHP script was processed.  This means that while the first PHP request for a given client site needs to (quickly) spin up a PHP process, that process lives on after the request is finished (up to our idle timeout).  If a second request comes in before that timeout ends, it is serviced by the same process, saving start-up time and, more importantly, allowing us to once again benefit from the performance boost of PHP OpCode caching.
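In PHP-FPM terms, the per-user pool and idle timeout described above map onto a pool definition along these lines (the names and values are illustrative, not our production settings):

```ini
; One pool per user: processes run as that user, are spawned
; on demand, and are kept alive between requests.
[bob]
user = bob
group = bob
listen = /var/run/php-fpm/bob.sock

pm = ondemand                   ; spin workers up only when requests arrive
pm.max_children = 10            ; cap on concurrent PHP processes for this user
pm.process_idle_timeout = 60s   ; keep an idle worker (and its OpCode cache) warm
pm.max_requests = 500           ; recycle a worker after this many requests
```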

The ability to use a PHP pool, when combined with the per-user resource scheduling available to us in CloudLinux, really opens things up possibility-wise, allowing us to fairly distribute resources while maintaining performance across all users on a given server.

For instance, here’s the CPU and RAM utilization over a given day for a typical user on one of our shared linux webhosting servers:

[Graph: CPU and memory utilization showing normal traffic activity]

Now, this user’s site got a pretty normal, consistent level of traffic throughout the day.  It of course had some spikes here and there, but overall, a pretty even-keeled distribution of traffic.   You’ll note that the memory usage is pretty even throughout the day, even though CPU jumped a bit more here and there.  This is because even though the client’s site may have needed more processing time, the standard PHP process pool for them was already running, and simply kept handling the visitor load.

Now, let’s see what happens when things get… busy.  This customer had a far more interesting day:

[Graph: CPU and memory utilization showing peaks caused by a slight DDoS attack]

Now, this graph is a bit of an anomaly.  Not enough to cause a panic, but, well, they were dealing with a slight DDoS attack, in the form of someone attempting to brute-force user logins to their forum.   The traffic was pretty steady throughout the day, but with large spikes at random times along the way.

While CPU utilization got pretty intense at times, RAM utilization stayed pretty consistent (most of the time).   There were a couple of times throughout the day when the system determined it was unable to provide all the CPU resources demanded by the attacker, so things did get throttled a bit here and there with the ebb and flow of the attacker’s traffic.  But the server’s ability to maintain a single, persistent pool of PHP processes to handle the requests, complete with OpCode caching and all the other benefits of reusing the same process for multiple requests, allowed the overall impact on memory to stay pretty consistent throughout the day.

The important thing in this scenario is that, with the persistent PHP processes and OpCode caching available between requests, the overall load on the server generated by this attack was minimized.  The system did its best to maintain service to the client’s website throughout the ‘bursty’ attack periods, and no other client sites hosted on the same server saw any impact at all.

This is a huge change from the days of ‘PHP as CGI’, and we’re confident that Apache Workers + mod_lsapi + php-fpm gives us just the right combination of security, flexibility, and performance going forward to best serve our clients.


27 Jan 2019

ModSecurity: Default on Every Plan

Back in November we discussed the need for web application firewalls and some of the options out there for securing your site with one.  And, well, shortly thereafter someone reminded me:

Hey, wait, don’t we have ModSecurity implemented everywhere?

Oh yeah, ModSecurity, the old faithful of general-purpose WAF systems.   To be honest, we’ve run it across all of our servers for at least a year now, silently, in the background, with nobody noticing.  We’re currently utilizing the OWASP Core Rule Set on all of our servers, and while it does detect and prevent a wide range of ‘standard issue’ web-based attacks (Cross-Site Scripting, Code Injection, SQL Injection, etc.), the fact is, we have to be deliberately conservative in what we detect and block at the server level.  Something that may be ‘fishy’ to one website or code stack could be ‘business as usual’ on another site, so it’s not possible for us to make the rules “extremely secure”, because we don’t want to incorrectly block traffic that someone may actually need for their site to function.
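For the curious, deploying ModSecurity with the OWASP Core Rule Set generally boils down to enabling the engine and including the ruleset. A generic sketch (the file locations vary by distribution and are assumed here, not taken from our servers):

```apacheconf
<IfModule mod_security2.c>
    SecRuleEngine On
    # Baseline CRS configuration first, then the rules themselves
    Include /etc/modsecurity/crs-setup.conf
    Include /etc/modsecurity/rules/*.conf
</IfModule>
```

The conservative tuning described above happens in that setup file and in per-site exclusions, not in the engine switch itself.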

Think of our ModSecurity implementation as a ‘first line of defense’: it’s keeping an eye out for the really unscrupulous traffic, the things that we can look at and say “no way that can be legit traffic!”.  But the more granular, focused, specific needs of a given web platform? That’s where the need for a Web Application Firewall specific to your own site and needs comes in.

One thing we have rolled out in the last couple weeks, just in case, is the ability for clients to disable ModSecurity on a given domain under their account.  We don’t recommend it, and I believe we’ve only had one instance of a client really really wanting to do it, but that option is there now.  By default, we enable it everywhere, but if you go into cPanel, under the “Security” section, there’s now a ModSecurity area where you can disable it on a per domain basis.  It’ll warn you that this is potentially unsafe, and not recommended outside of debugging purposes (usually to prove that it’s not ModSecurity causing an issue with a site), but, if you need it, the option is there.

22 Dec 2018

New SSL Certificate Offerings

Earlier this year we were very happy to announce that we were going to be able to start offering Free Domain Validated SSL Certificates to all of our hosting clients, backed by COMODO CA and issued by cPanel.

Thanks to the integration with cPanel, clients would be able to gain this benefit with zero work on their part.  cPanel would handle the issuing of the certs, provisioning them into the hosting account, and even handle renewing them every 90 days as they came up on their expiration dates.  It was truly “zero hassle SSL”.

Given the state of the web in 2018, and the growing trend towards “https everywhere”, we were very excited to be able to provide this much needed service free of charge to all of our clients for use with their Pure Energy hosted websites.

The introduction of AutoSSL to our feature lineup has helped to shine a light on the topic of SSL Certificates for our customers, and this has led to a number of questions regarding SSL certificates, their usage, and the limitations of the AutoSSL feature:

  • How can I get a “Green Bar” SSL Certificate?
  • Can I get a “Secure Site Seal” for use on my site?
  • I need a certificate for <X>, and it’s not actually my site that’s hosted with Pure Energy.
  • What’s with this 90 day expiration thing?


Previously, when these questions would come up, we would generally point the person towards either RapidSSL or GeoTrust, depending on what exactly they were looking for.    They would have to procure the certificate directly from the CA, and then, if they wanted to use the cert with their Pure Energy hosted site, venture back through the gamut of “SSL cert installation” via cPanel.  Now, to be fair, cPanel does a great job at making this as painless as possible, but even with cPanel’s help, SSL installation can still be a bit…  cumbersome at times.

So, starting today I’m happy to announce that in addition to the AutoSSL feature, which is still included free of charge for every one of our hosting clients, we’re also going to be offering the following standard SSL Certificates for purchase via our Client Portal:

 

Pricing across the board is far lower than the rates that RapidSSL and GeoTrust charge directly, with certificates ranging from $17.95 for a 1 year RapidSSL Certificate to $279 for a 1 year GeoTrust QuickSSL Premium Wildcard.  2 year certificates are also available, generally at about a 15% discount over (2) 1 year certificates.

These certificates, while they are not included free of charge with your account, will have the standard 1 or 2 year renewal term (your choice), can be used on sites/services/things other than the site you have hosted with us, and will come with all the standard features/warranties/site seals that RapidSSL and GeoTrust offer with said certificate.

If you are a Pure Energy Systems hosting client, and you order a certificate via your Client Portal account for a website domain that you host with us, the portal can even handle provisioning the certificate into your cPanel account once the order is complete and the certificate authority issues the certificate.

It’s really a combination of best of both worlds, and we’re hoping that between “Free AutoSSL” and “Paid Certificates”, we can help do our part to make “https everywhere” as painless and cost effective as possible.


11 Apr 2014

Heartbleed, SSL, and What You Need to Know…

So earlier this week the IT world got a nasty little shock in the form of the Heartbleed Bug, a horrid little slip of code in the Open Source OpenSSL library that is causing headaches for IT folks the world over by now.

Long story short, versions of OpenSSL released from March 2012 (v1.0.1) up until this week have a bug that allows an attacker to gain access to “leaked” chunks of server memory, in some cases revealing sensitive information. We've seen reports of this being used to access username/password combinations, and other pieces of information that may be stored in a server's or device's memory at the time of the exploit.
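If you want a quick first look at whether a Linux box might be in the affected range, checking the library version is a reasonable start. One caveat: distributions often backport security fixes without bumping the version string, so an “old” version alone isn't proof either way.

```shell
# Print the installed OpenSSL version. The 1.0.1 through 1.0.1f releases
# were vulnerable; 1.0.1g and the 0.9.8/1.0.0 branches were not.
openssl version
```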

At this point we have confirmed that all Pure Energy Systems managed servers and systems are not vulnerable to the Heartbleed exploit. In some cases (such as our older CentOS5 based servers) the OpenSSL library on the device was entirely unaffected (being based on v0.9.8) while in some newer cases, most notably the new nameservers we just rolled out over the last couple weeks, the security updates that fixed the bug were rolled out onto servers as soon as they were available from the various software vendors.

Our upstream providers (datacenters, etc) have been running around doing the same, identifying which of their systems are accessible to the internet at large, which ones may be impacted and patching where applicable.

As much as I hate to say it however, I do not believe we, the internet at large, have quite seen the last of the fallout from this bug as of yet. Due to the widespread adoption of the OpenSSL library, we've seen and heard of all sorts of devices and application stacks out in the wild that appear to be impacted by this bug: everything from firewalls, routers, and virtualization platforms to even some phone systems.

I'm almost certain there will be many tech folks running around over the weekend testing, verifying, and determining which of their company's products are impacted, and I suspect we'll be seeing “Security Update Required!” notices coming out from anyone and everyone over the next few weeks. Keep an eye on your inbox for updates from any companies you utilize, and if you have any type of internet-facing device or system (for instance, a home broadband router) with an SSL-protected port that is available to the internet, it may be wise to check with the vendor to see if they have determined its status.

16 Oct 2009

Bogus Emails, not from us!

Over the last week or so we've seen a dramatic rise in a particular type of scam email, both to our personal inboxes and, more recently, to clients as well. The emails in question usually read something like this:

From: noreply@<yourdomain>
To: <you>@<yourdomain>
Subject: A new settings file for the <your email address> has just been released

Dear user of the <yourdomain> mailing service!

We are informing you that because of the security upgrade of the mailing service your mailbox (<youremail>) settings were changed. In order to apply the new set of settings click on the following link:

<a link that appears to be to your domain, but is really to some bad software they have hosted somewhere>

Best regards, Technical Support.

The second version we've seen omits the link, but actually has a zip file attached to the email that you are encouraged to open and run.

Let's just get this out of the way: these emails are not from our technical support staff. The emails are most likely designed to lure you into running the file (or visiting the website) in order to infect your machine with some form of… bad thing. Maybe it's spyware, maybe it's adware, maybe it turns your machine into a bot and proceeds to start spamming out the same email to other unsuspecting folks.

It's extremely rare that anything we do will result in you getting an email from us that says it's from @yourdomain. There are, I believe, a couple of automatic emails (bandwidth warnings, etc.) that may appear to come from addresses such as “no-reply@ourservername.purenrg.com” or whatever, but never, ever, that I can think of, have we sent out an email that wasn't from @purenrg.com (not to say that someone couldn't spoof a fake “From” using our domain just as easily).

Moreover, I can't think of any instance where we've *ever* emailed a zip file to a client asking them to upgrade anything with it, especially to “update your email”… email settings are either entered by hand in your mail client software, or perhaps using the auto-configure links provided inside your cPanel, but never (at least from us) via an exe or other program file provided via a somewhat vague email from an invalid address pretending to be your domain.

Normally we don't post about every new scam or virus email, but we've seen this one pop up quite a bit recently, and wanted to try and provide this post just in case. 🙂 If it prevents one person from getting infected with whatever the payload is in that attachment, then it's worth the effort to post about.

(c) 2019 Pure Energy Systems LLC - All rights reserved.
