Archive for the ‘Industry’ Category

Data Center Experiences

Tuesday, June 28th, 2011

I’ve been in this industry for quite a while and have used many network providers, colocation providers, and hosting companies. We at 1-800-HOSTING even maintain presences at other providers for monitoring, custom projects, remote backups, etc., and at times the service has been less than stellar. Here are a few of the experiences I’ve had over the years and how we use them as motivation to make sure our customers don’t suffer through the same pitfalls.


  • Answer the support telephone: When offering phone support, it is important to make sure someone is available to answer it. Nothing is more frustrating than being unable to reach someone quickly during an emergency.
  • Respond to emails promptly, even if it’s just a “Hey, we’re looking into this and we’ll get back to you soon.” This lets me know that someone has seen the issue and is aware of it. Setting a time when I should expect a response is even better, so I can plan out my already busy day.
  • Send maintenance notifications with as much notice as possible. If it’s an emergency, even an hour’s notice is much better than none at all.
  • When an outage occurs, give as much information as possible about what caused it and what is being done to correct it. Brief, non-technical responses leave me unsure whether you know exactly what is happening.
  • If an outage is due to human error, just say so! I’d much rather hear that someone made a mistake (it happens to everyone) than get a vague response about what caused the issue.




Real Time Web Analytics Tool – Use Your Own Data To Help Build Your Traffic And Capitalize On The Traffic You Already Have.

Monday, June 27th, 2011

One of the coolest software applications to surface over the past couple of years is a real-time web analytics tool called Woopra. It is pretty remarkable in that it lets you watch your visitors in real time through a fairly rich GUI. With this tool you can track visitors and see where they came from: the search terms used to find you, referrals from social networks, backlinks, and more.

With the data compiled, you can dive deep and perform true analytical research on the rich data that is collected. That is why it is called analytics software rather than a mere stats program.

This program allows you to take the data you have and make important decisions on advertising, promotions, SEO, and more. With real-time analytics you can make quicker decisions about things like new paid search campaigns or banner advertising campaigns you may have started.

Lead Generation

If you are interested in finding out which companies are interested in your product or service, this tool is very useful. Woopra can show the company names behind visits to your site, so you can actually see who might be interested in what you offer. It definitely helps when deciding who might be a potential prospect to contact.


Chat Feature

A tool like this also makes it easy to interact with your visitors and track specific ones. There is a built-in chat feature with two-way chat initiation: a great way for site visitors to interact with you, especially when they are considering buying a product and want to speak with someone.

Multi Site Tracking

There are many features not described here, but one of the coolest is that if you have multiple sites, you can track them all from one spot. There is a web-based version of the application as well as a desktop download, so you can choose whichever flavor you like.



Properly Disposing of Obsolete Web Servers

Friday, June 24th, 2011

Compliance and common sense both dictate that tossing old servers in the dumpster is definitely not a good idea. We recently disposed of hundreds of old and obsolete servers and while it’s not the first time we’ve gone through this drill, we still have to take great care to ensure that it’s done the right way.

The proper procedure and process for destroying obsolete servers really centers around (3) things.

1. Total and utter destruction of the hard drives
There are lots of ways to destroy hard drives, but only one way to ensure that no data will ever be recovered from a drive. You could use a giant magnet to wipe them, or connect them to a professional eraser that completely wipes the data off the drive and makes it available for use elsewhere. Our choice is to have the drives put into a machine specifically designed to grind them into tiny pieces. Magnets are great, but they are not foolproof, and neither is the machine that erases the data. However, when you hand someone a hard drive and they hand you back a cup full of the metal shavings that used to be that hard drive, you can rest assured no one will ever access any data previously stored on it.
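For what it’s worth, the software-erasure option mentioned above is commonly done on Linux with `shred` from GNU coreutils, which overwrites the target in place before (optionally) deleting it. This is just a sketch of the idea, demonstrated on a temporary file rather than a real drive; note that overwriting is not reliable on SSDs with wear-leveling, which is one more argument for physical destruction.

```shell
# Overwrite with random data 3 times, then zero it out and remove it.
# Pointing this at a whole device (e.g. /dev/sdb) wipes the entire
# drive -- triple-check the target before running it that way!
f=$(mktemp)
echo "customer data" > "$f"

shred --iterations=3 --zero --remove "$f"

# The file is gone; its blocks were overwritten before deletion.
ls "$f" 2>/dev/null || echo "wiped"
```

Even so, as described above, we treat software erasure as a fallback: the grinder leaves nothing to argue about.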

2. Ensuring that environmentally unsafe metals & chemicals are dealt with responsibly and ethically
Again, you must pick the right company, but if you do, you will be assured in writing and via certifications that any dangerous materials will be disposed of according to EPA regulations and will not wind up floating down some river in China. There are lots of companies who will be more than happy to haul your equipment off to the black market, where nobody knows what happens to it; your best bet is to avoid those companies at all costs.

3. All salvageable materials are recycled in a safe and responsible manner
If you work with a professional, licensed company, they will responsibly disassemble the servers and make sure that any recyclable materials are separated into the appropriate groups (metals, plastics, etc.). Those materials can then be sold by weight to companies that reintroduce the recycled materials into newly manufactured products, which helps the environment. That’s a win-win for everyone, and it’s incredibly easy to do.

So there you have it. When the time comes to destroy old or obsolete web servers or network gear, be sure to choose the right company and get certifications of destruction. The company we work with scans every single hard drive before they grind them, and they send us back a list of cross-referenced service tags along with a certificate of destruction for each drive. That’s the only way to really be sure you are protecting the privacy of your clients and being both environmentally and professionally responsible.

Almost forgot: don’t expect to get any money for that old gear. The compensation you receive is having the equipment removed from your facility at no cost. Likewise, you should never have to pay for this service, because the compensation they receive is the ability to recycle the materials they obtain from you.


Cloud Computing Storage Options

Wednesday, June 22nd, 2011

One of the things I find most interesting about cloud computing storage is “local storage” versus “centralized storage”. For a quick primer: local storage means the physical hard drives that reside in the servers used to run your instances in our cloud. Centralized storage means separate storage arrays, apart from the cloud servers, that store your instances. Since you can select one or the other, let’s break down the pros and cons of each.

Local Storage:
If you select the local storage option, your instances are running and being physically stored on the same servers. The upside is that your disk I/O will typically be a bit quicker, because everything is connected to the same bus on that server. This is really good if you’re running large database applications or have requirements for very fast disk reads and writes.

The downside is that you give up the high-availability options that are typically native to cloud computing. In other words, if that particular server goes down, your websites go down with it, and they won’t be automatically migrated to a different machine; that process has to be done manually because you’re not utilizing centralized storage. With local storage you can still take snapshots and restore them to another available server, so you keep the peace of mind of automated snapshots stored off the server. For a recovery, those snapshots have to be converted to a template and a new instance spun up from that template, but you will be back up and running again quickly. It does require manual intervention, and while it takes less time than recovering a typical dedicated server, it still takes a little time.
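The manual recovery path just described (snapshot, then template, then a fresh instance on another server) can be sketched roughly like this. All of the function and field names here are hypothetical illustrations, not our actual cloud API:

```python
# Rough sketch of the manual local-storage recovery flow:
# snapshot -> template -> new instance on another server.
# Names are illustrative only, not a real provider API.

def snapshot_to_template(snapshot):
    """Convert a stored snapshot into a bootable template."""
    return {"name": snapshot["instance"] + "-template",
            "disk_image": snapshot["disk_image"]}

def spin_up(template, target_server):
    """Boot a new instance from a template on an available server."""
    return {"instance": template["name"].replace("-template", ""),
            "server": target_server,
            "state": "running"}

def recover(snapshot, available_servers):
    # A human picks a server with free capacity, then performs
    # the two manual steps the post describes.
    target = available_servers[0]
    template = snapshot_to_template(snapshot)   # manual step 1
    return spin_up(template, target)            # manual step 2

snap = {"instance": "web01", "disk_image": "web01-2011-06-22.img"}
restored = recover(snap, ["node07", "node09"])
print(restored["state"], "on", restored["server"])  # running on node07
```

The point of the sketch is simply that every arrow in the flow is a human action; nothing fires on its own.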

Centralized Storage:
If you select the centralized storage option, your instances run on a local server but are actually stored on a separate storage device. So essentially the server your instances run on is used solely for CPU and memory, while all of the storage is handled by a separate device attached to the network. The upside is the high-availability options, which just automatically work if the server running your instances goes down. If that happens, our management console detects the failed server, immediately scans the network for other available servers, and instructs the server with the greatest amount of free resources to mosey over to the storage device and spin up those instances right away. This is much different from having to do a restore, because there really is nothing to restore: your data is all still intact on the storage device. Free servers spring into action, snatch up your instances, and provide CPU and memory so they can spin up again and resume as normal. This entire high-availability recovery typically completes in under a minute. So to recap: if the server your instances are running on fails, other servers will take over operations within a minute without any human intervention at all.
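The failover logic above boils down to a tiny decision: find the healthy server with the most free resources and hand it the failed server’s instances. Here is a toy sketch of that idea, with made-up node names and fields, not our actual management console:

```python
# Sketch of the automatic HA failover described above: detect a
# failed node, then hand its instances to the healthy server with
# the most free resources. Purely illustrative.

def pick_replacement(servers):
    """Choose the healthy server with the most free resources."""
    healthy = [s for s in servers if s["up"]]
    return max(healthy, key=lambda s: s["free_ram_gb"])

def failover(failed, servers):
    target = pick_replacement(servers)
    # The instances are still intact on centralized storage; the
    # replacement server just attaches them and boots them up.
    target["instances"].extend(failed["instances"])
    failed["instances"] = []
    return target

nodes = [
    {"name": "node01", "up": False, "free_ram_gb": 0,  "instances": ["web01", "db01"]},
    {"name": "node02", "up": True,  "free_ram_gb": 8,  "instances": []},
    {"name": "node03", "up": True,  "free_ram_gb": 24, "instances": ["mail01"]},
]

survivor = failover(nodes[0], nodes)
print(survivor["name"], survivor["instances"])
# node03 ['mail01', 'web01', 'db01']
```

Because the data never moves, the whole thing is a scheduling decision rather than a restore, which is why it finishes in under a minute.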

Hybrid Hosting:
Another option that is very viable and widely used is a hybrid approach, which combines cloud computing with a dedicated or managed server. If you have, for example, (4) websites and (1) database that requires faster disk access, you can run your websites on the cloud using centralized storage for high availability (HA) and run only your database on an instance that utilizes local storage for the speed. That way you get the best of both worlds: over-the-top high availability for your websites and ultra-fast storage for your database.

So as you can see there are plenty of options and it’s relatively simple to mix and match to find a solution that best suits your needs. We are always happy and eager to help come up with solutions for our clients, so let us know what we can do to help you.


What is Cloud Computing to me?

Monday, July 12th, 2010

When I look at cloud computing, the primary differentiator that keeps jumping out at me is the ability to quickly recover from failure. Since I have a group of servers that host various sites, I can fully understand what the benefits of cloud computing would mean for me.

Going back to the ability to recover quickly from a failure, let’s look at the tried and trusted method of recovering from the failure of a dedicated server. Let me preface this by saying that dedicated servers have proven to be an excellent platform for hosting sites both large and small. They give you complete control, 100% of the server’s resources, and complete isolation from other websites. However, in the event of a failure, the restoration process can be tedious at best. In a perfect world your dedicated server would have a RAID configuration, and if you lost a hard drive, the system would automatically fail over to the second drive and notify you that the failed drive needs replacement. This provides the opportunity to swap the drive in a very controlled manner, during a maintenance window. The restore process is fairly straightforward and has been done thousands upon thousands of times by various providers, with varying degrees of success depending on conditions. Backup and restore can be a tricky process, and oftentimes we are at the mercy of the companies who develop the backup software and hardware.

First the problem must be identified; in this case, let’s assume it is a failed primary hard drive. The server has to be powered down and the failed drive swapped. This can go quickly or slowly depending on circumstances. Then the server has to be brought online and the restore from the backup system initiated. This step is relatively quick, and provided there are no errors along the way, the restore should begin without incident. This is where it gets tricky, though, because depending on how much data you have, the restore can either finish quickly or take a very long time. If you have a simple Linux server with a few gigs of data, it should restore very quickly. However, if you have, say, a Windows server running SQL Server with several terabytes of data, it might take a while. The real problem is that your server is down during the restore and will be unavailable to your clients until it’s completed and the server has gone through a final reboot and system check. This is where cloud computing kills the dedicated server, in my opinion.

Now let me outline the restore process for cloud computing. We refer to backups in cloud computing as snapshots. A normal backup typically does either a file-by-file or block-by-block backup of the entire hard drive or drives. Not only does this take a while, but the resulting files, which are more than likely highly compressed, are specific to your backup system and in the format that system requires to perform a successful restore. A snapshot, on the other hand, is literally just that: it’s as if a photograph were taken of your hard drive in its current state and moved to a storage device. That snapshot is not a highly compressed, highly modified version of your data and operating system; it is a fully functioning duplicate that, in the event of a primary failure, can simply be booted up.

So the restore process is reduced from a series of steps that require lots of manual intervention, and maybe even a technician to pull your server and do physical work on it, to you simply clicking a button that says “restore this snapshot”. Let me make sure you understand this, because even though it is an incredibly simple concept, people oftentimes still don’t get it. The system takes a snapshot of your cloud computing environment and instantly stores that snapshot on a storage device. When the system fails for whatever reason, whether it is hacked beyond recognition or an angry ex-employee went in and deleted all of your content, you instruct the system to restore whichever snapshot you want, and all it does is boot up that snapshot and your environment is restored. How cool is that?!
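The contrast can be made concrete with a deliberately tiny sketch: a traditional backup needs a chain of serial steps before anything is usable again, while a snapshot is already bootable, so "restore" is one step. Nothing below models a real backup product; the dictionaries and step names are invented for illustration:

```python
# Toy contrast between the two recovery models described above.
# Illustrative only -- no real backup system works via these dicts.

def restore_from_backup(backup):
    """Traditional restore: several serial steps, server down throughout."""
    steps = ["swap failed drive", "boot server",
             "decompress backup archive", "copy files back", "final reboot"]
    return {"data": backup["data"], "steps": steps}

def restore_from_snapshot(snapshot):
    """Snapshot restore: the image is already bootable, so just start it."""
    return {"data": snapshot["data"], "steps": ["boot snapshot"]}

backup   = {"data": "site files", "format": "compressed archive"}
snapshot = {"data": "site files", "format": "bootable image"}

print(len(restore_from_backup(backup)["steps"]))     # 5
print(len(restore_from_snapshot(snapshot)["steps"]))  # 1
```

Both paths end with the same data; the difference is the number of steps between failure and a running system, and how many of them need a human.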

The other benefits of cloud computing are very obvious but the ability to recover quickly and completely from any type of failure is what really jumps out at me. Cloud Computing is still in its infancy but the writing is on the wall, the upside is crystal clear and I predict that eventually everyone will hop on the cloud.

~ Till next time
