At Atlantic.Net, we provide backup, complete with remote hands and mustaches.
As we all know, information technology is a vast field with many different facets. Staying abreast of all pertinent information and prioritizing all aspects appropriately is a tall order. Often, enterprises and SMBs find that certain tasks are better handled by an outside entity that specializes in specific aspects of tech and has the tools available to quickly and accurately diagnose and solve problems.
Even if a company has an IT specialist, that person is sometimes so busy handling day-to-day needs that broader issues, such as network maintenance and security, cannot be given the focus they deserve. Hosting companies often provide various managed services to fit these situations.
Operating systems management, aka managed OS, is one such service. OS management allows an organization to stay current on patches, upgrades, and other elements of the OS, while keeping its own in-house IT professionals free to handle tasks specific to operation of the business. Below are several of the standard activities involved in business OS management.
Patch management is a core component of systems management. It involves locating or creating the best possible code – which is then applied as a patch to a specific part of the system – to improve usability and efficiency. A particularly crucial part of patch management is the testing phase, but it’s just as important to monitor the system after the patch is applied to determine whether it is working entirely as intended (an aspect of configuration management, as described below).
A couple of other core concerns are involved with patch management as well. One is continuing education: an administrator should be an expert on all patches and elements of code throughout the system. That expertise is necessary to prevent patches – whether brand new or applied months ago – from conflicting with one another.
Needless to say, patch management also involves careful installation. If a patch is applied at the wrong point in the code, severe headaches and lost business can result. A skilled patch manager is extraordinarily conscientious about applying the patch to precisely the same location that was used when conducting tests.
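To make this concrete, here is a minimal patch-audit sketch in Python. It assumes a Debian/Ubuntu-style server where the apt tool is available; the command and output format are assumptions that would change on other platforms, and a real patch-management workflow involves testing and scheduling well beyond simply listing pending updates.

```python
#!/usr/bin/env python3
"""Minimal patch-audit sketch: list packages with pending updates.

Assumes a Debian/Ubuntu-style host where `apt` is available; adapt the
command for other package managers (e.g. `yum check-update`).
"""
import subprocess
from datetime import datetime, timezone

def pending_updates():
    """Return a list of package names that have an upgrade available."""
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    )
    packages = []
    for line in result.stdout.splitlines():
        # Skip the "Listing..." header; package lines look like
        # "openssl/jammy-updates 3.0.2-0ubuntu1.10 amd64 [upgradable from: ...]"
        if "/" in line:
            packages.append(line.split("/", 1)[0])
    return packages

if __name__ == "__main__":
    stamp = datetime.now(timezone.utc).isoformat()
    updates = pending_updates()
    print(f"{stamp}: {len(updates)} package(s) pending update")
    for name in updates:
        print(f"  - {name}")
```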
Configuration management, broadly speaking, refers to the deployment and maintenance of software or hardware so that everything functions cohesively and uniformly throughout the infrastructure. The hardware and software applications of the term involve different practices but are identical in theory. Each aims for an organized and coherent setup of all elements of a computing system.
Configuration management for a business’s OS means that when changes occur, monitoring must follow. All aspects of configuration should be stored and easily accessible in a repository, with an individual or team that has a full understanding of its contents.
Examples of changes to configuration include the following:
- Deployment of patches
- Application installation
- New users or changes to user accounts/permissions
- Any maintenance elements.
Proper monitoring of systems configuration involves automated applications that present any adjustments as they occur. The repository mentioned above allows the manager to see any adjustments that have occurred, over time, to various elements of the system.
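As a rough illustration of the monitoring-and-repository idea, the following Python sketch hashes a few watched configuration files and compares them against a stored baseline. The watched paths and baseline file are assumptions chosen for the example; dedicated configuration-management tools do this at far greater depth.

```python
#!/usr/bin/env python3
"""Minimal configuration-drift sketch: hash watched files and compare to a baseline.

The watched paths and baseline location are illustrative; real configuration
management tools provide far richer change reporting.
"""
import hashlib
import json
from pathlib import Path

WATCHED = ["/etc/ssh/sshd_config", "/etc/nginx/nginx.conf"]  # example paths
BASELINE = Path("config_baseline.json")                      # the "repository"

def snapshot(paths):
    """Map each path to the SHA-256 of its current contents."""
    digests = {}
    for p in paths:
        path = Path(p)
        if path.exists():
            digests[p] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

def main():
    current = snapshot(WATCHED)
    if BASELINE.exists():
        baseline = json.loads(BASELINE.read_text())
        for path, digest in current.items():
            if baseline.get(path) != digest:
                print(f"DRIFT: {path} has changed since the last baseline")
    # Store the new snapshot so the next run compares against it.
    BASELINE.write_text(json.dumps(current, indent=2))

if __name__ == "__main__":
    main()
```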
Proactive monitoring is again a broad IT term, with specific concerns for each subfield of computing. It is the responsibility of the engineers performing the monitoring to locate and fix any possible issues at all hours of the day or night. Proactive monitoring ensures that the system is operating smoothly, efficiently, and without any errors. Using applications that track data on the network, an individual proactively monitoring a system both reduces risk and develops insight into performance that can be used to further bolster the infrastructure.
Essentially, proactive monitoring is a form of surveillance across the entire operating system that – true to its name – solves problems before they have time to develop. It enables businesses to stay a step ahead of risk and protects sensitive data from corruption.
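Here is a minimal sketch of what one proactive check might look like in Python, polling the load average and disk usage and raising an alert when either crosses a threshold. The thresholds, interval, and alert action are illustrative assumptions; a production setup would feed a monitoring platform or page an on-call engineer.

```python
#!/usr/bin/env python3
"""Minimal proactive-monitoring sketch: poll load and disk usage, alert on thresholds.

Thresholds and the alert action (here just a print) are illustrative, and the
load-average call assumes a Unix-like host.
"""
import os
import shutil
import time

LOAD_THRESHOLD = 4.0    # 1-minute load average considered "high" (assumed)
DISK_THRESHOLD = 0.90   # alert when the filesystem is 90% full (assumed)
CHECK_INTERVAL = 60     # seconds between checks

def check_once():
    load1, _, _ = os.getloadavg()
    if load1 > LOAD_THRESHOLD:
        print(f"ALERT: 1-minute load average is {load1:.2f}")

    usage = shutil.disk_usage("/")
    used_fraction = usage.used / usage.total
    if used_fraction > DISK_THRESHOLD:
        print(f"ALERT: root filesystem is {used_fraction:.0%} full")

if __name__ == "__main__":
    while True:
        check_once()
        time.sleep(CHECK_INTERVAL)
```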
Infrastructure lifecycle management
In business as in life, everything has a lifecycle. The goal of infrastructure lifecycle management, when managing the OS for your business, is to make sure that all aspects of the infrastructure are both in line with the goals of the business and allow appropriate degrees of protection. Management of the infrastructure’s lifecycle – both with regards to its pieces and its entirety – is primarily a function of monitoring, much like the other managed services described above.
Proper infrastructure lifecycle management is not just about getting rid of old machines and buying new ones. Optimization of the infrastructure and the security of all data it contains are core aspects of lifecycle management as well. If you need to destroy data that is no longer of use, and you do not want it to get into the wrong hands, engineers handling lifecycle management can also be of help.
Managed services become especially attractive to many organizations as the holidays approach, a period that can bring a tripling or even more dramatic surge in traffic. Here is some advice for how to handle the busiest times of year for your site, whether that’s the holiday season or otherwise.
By Brett Haines
A great way to increase the security of your site is to deploy two-factor authentication. Of course you want complex passwords, because they make it difficult for someone to guess the correct login credentials. Today, however, hackers have a number of different ways to obtain passwords, including the following:
- on a PC that has been stolen or discarded
- on other sites, if an identical password is used there
- via key-logging malware installed on the PC.
In addition to passwords, you can heighten your security by installing SSL certificates on your server and using other forms of encrypted transport, such as a virtual private network (VPN) tunnel for remote access (the Point-to-Point Tunneling Protocol, or PPTP, is one long-standing example, though it is no longer considered strong). A simple way to target the login process specifically, though, is to add an additional step, another “factor.” This method – two-factor authentication, or TFA – is now available and recommended for accounts with Google, Microsoft, Facebook, and others.
With two-factor authentication, in addition to inputting a username and password, another piece of login information is required. The most common way to utilize TFA is with temporary codes sent to the user’s cell phone. That obviously makes intrusion into the account significantly more difficult, avoiding data theft and possible lawsuits.
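For the curious, below is a minimal Python sketch of the time-based one-time password (TOTP) scheme described in RFC 6238, which is what authenticator apps of this kind implement. The shared secret shown is a made-up example, and a real deployment should use a vetted library rather than hand-rolled cryptography.

```python
#!/usr/bin/env python3
"""Minimal time-based one-time password (TOTP) sketch, per RFC 6238.

The shared secret below is a made-up example; use a vetted library in production.
"""
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret_base32: str, submitted_code: str) -> bool:
    """Check the code a user typed in as their second factor."""
    return hmac.compare_digest(totp(secret_base32), submitted_code)

if __name__ == "__main__":
    SECRET = "JBSWY3DPEHPK3PXP"   # example secret only
    print("Current code:", totp(SECRET))
```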
Setting up two-factor authentication for internal server logins
You can establish TFA for a variety of your own internal credentials and also for your customers’ accounts. Here are options to enable a second factor for your website’s administrators and content managers on various systems. Bear in mind before setting up any of these solutions that you will need all those who add content to your site or manage it to be prepared for the new system.
WordPress – If you use WordPress for your content management system (CMS), setting up TFA is incredibly simple. There is a plug-in called Google Authenticator, which pairs your login form with Google’s widely used authenticator app. All you need to do is install the plug-in. Keep in mind that every user who logs in should also have the Google Authenticator app installed on a mobile device, and that correct phone numbers are essential: if the app is not installed, some setups can instead deliver the temporary codes via text message or automated phone call.
Joomla! – If you use Joomla! for your CMS, you have a number of different extension options. Because Joomla! is organized similarly to WordPress, it’s the same basic process to get TFA up and running.
Drupal – Finally regarding CMS logins, developers have created various modules for Drupal as well. Be sure to check the reviews for modules to ensure that you don’t run into any issues, which of course could be a major setback for your business.
cPanel/Plesk – Typically you will also have the option within your hosting account control panel to set up TFA. Much of the time, the authentication program available is the Google system. Again, make sure you are fully prepared for this change, since adding a level of security also makes it more difficult for legitimate users to access their accounts.
Setting up TFA for customer accounts
For two-factor authentication of customer accounts, you have two basic options:
- Paid solution – You can use heavy-duty TFA programs created by organizations specializing in security, such as Symantec. With any enterprise application you choose for your customers, you have the option to make two-factor mandatory or optional. Google and Facebook, for instance, allow users to decide for themselves. Depending on how sensitive the data is on your site, you may want to consider making it a necessary part of getting into accounts. Just as with preparing those within your own company for TFA, you want your customers to be notified in every possible way. You also want simple but thorough documentation and easy support access if anyone has trouble.
- Develop your own system – If you work with developers or have them on staff, you may want to consider developing your own system. Potentially this option could be less expensive over time, and you can design it exactly as desired.
If you do go with a paid or custom-built high-grade solution, the advantages include the following:
- TFA can be implemented for everyone – customers, site administrators, and all employees.
- You have a wider range of options, so you can decide exactly how you want the two-factor authentication system to behave in various scenarios. Perhaps second-factor logins matter most to you when users are accessing specific areas of the site (a minimal sketch of that idea follows this list). Naturally, your control over the TFA program is greatest when you develop it yourself.
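As promised above, here is a minimal Python sketch of requiring the second factor only for sensitive areas of a site. The path prefixes and session fields are hypothetical; the point is simply that a home-grown system lets you decide where the extra step applies.

```python
#!/usr/bin/env python3
"""Minimal sketch of a per-area two-factor policy for a custom TFA system.

The path prefixes and session fields are hypothetical examples.
"""

# Areas of the site where a plain password is not enough (assumed prefixes).
SENSITIVE_PREFIXES = ("/admin", "/billing", "/account/settings")

def second_factor_required(path: str, session: dict) -> bool:
    """Return True if this request should be blocked until TFA is completed."""
    in_sensitive_area = path.startswith(SENSITIVE_PREFIXES)
    already_verified = session.get("tfa_verified_at") is not None
    return in_sensitive_area and not already_verified

if __name__ == "__main__":
    session = {"user": "alice", "tfa_verified_at": None}
    print(second_factor_required("/blog/latest", session))       # False
    print(second_factor_required("/billing/invoices", session))  # True
```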
Using TFA is wise, but it isn’t everything. At Atlantic.net, we also recommend these best practices for e-commerce security, many of which have broader implications for any type of site.
By Brett Haines
Dedicated servers and virtual private servers (VPSs) are two common hosting options for those companies too large or complex for shared hosting. Hosting clients typically wonder about the limitations of a VPS and whether it is worth the reduced cost to choose server virtualization over a dedicated server. Let’s briefly review exactly what makes a server virtual or dedicated before surveying a few basic differences.
Dedicated Server & VPS: definitions & hosting usage
You may have a general idea of what dedicated servers and virtual private servers are. However, a quick overview of these two terms will ensure it’s clear exactly how these two types of servers differ.
A dedicated server is a server that is charged with one particular task; hence the term dedicated. A dedicated server might be used to host your site, for instance, or an individual application. A server might also be dedicated specifically for email, for DNS purposes, or for gaming. Of course, you might choose to use a server for a number of different functions. The bottom line as a hosting customer, regardless of the technical definition of a dedicated server, is that the machine is fully available for your use.
When you use a dedicated server for hosting, you’re renting an entire server – the actual piece of hardware – from a hosting service. As long as you follow the guidelines within your hosting contract, you are able to customize the server any way you like. Other than limited access by the hosting staff for routine maintenance and to assist you as desired, you’re the only one using the server; it’s unavailable to other customers.
A virtual private server, as you can imagine, gets a bit more complicated. A VPS is a virtual machine (VM) that functions as its own server within a hosting environment. A virtual private server runs on its own instance of the operating system. On that OS, applications can run specifically within that section of the physical hardware. Virtualizing servers allows more than one operating system to be active simultaneously on the same server. Applications are also allowed full independence.
In a hosting environment, your VPS exists on the same physical machine as other companies/accounts; it has this feature in common with shared hosting. However, unlike shared hosting, a VPS involves much stricter lines of demarcation. Operating with your own OS through virtualization means that your security is enhanced and you have much greater freedom to meet your needs by modifying the parameters of the VPS.
Dedicated versus VPS – Cost
VPS hosting is significantly less expensive than dedicated hosting, simply because you are not renting the full physical hardware with a VPS. The primary reason a VPS is selected for hosting is that it is a more budget-conscious choice.
Keep in mind, because a VPS only has access to a portion of the resources of the server, many customers prefer dedicated servers for their power. However, when VPS servers utilize a cloud hosting model (as ours do), they are optimized for scalability. In other words, with the advent of cloud technology, a virtual server is much better prepared for peak loads and rapid growth.
Dedicated versus VPS – storage & speed
A virtual private server, whether or not it uses a cloud model, is typically not provisioned for the same amounts of storage space and traffic as a dedicated server. Clearly dedicated servers make sense for those who want to specify a particular type of machine with certain components and attributes they can preestablish. With a dedicated server, parameters are hard and fast, and you can add additional dedicated servers as needed.
VPS hosting involves multiple accounts running simultaneously on the same physical hardware. Naturally, access by a number of different hosting customers can decrease the speed and reliability of a site. You will also run out of room more quickly on a VPS.
However, it’s again worth noting how virtual private servers that use cloud hosting differ from typical VPS offerings. In the past, a VPS was like a piece of the whole pie that was the physical server (which is still the case in non-cloud VPS environments, although features such as bursting and swap space allow some leeway). In that way, although a non-cloud VPS was not a physical server, its capabilities were physically limited.
With access to the cloud, though, a VPS can now scale easily, on demand, to meet the needs of a growing business. Adding additional storage and power is as simple as clicking a button in your administrative panel. Granted, VPSs are not for everyone. Because a VPS does not constitute a physical server, those in need of certain customization capabilities will want to choose dedicated hosting.
Dedicated versus VPS – selecting a host
Trying to decide between a dedicated server and a virtual private server can be tough. We hope that picking out a quality web host does not need to be nearly as difficult. At Atlantic.Net, we’ve been in business since 1994. With almost 2 decades of experience, we know you’ll be satisfied with whichever of our solutions you choose.
By Kent Roberts
The parameters of cloud hosting, like any form of cloud computing, can be foggy and uncertain. The below comic expresses the dangers that can be found in a cloud’s gray areas.
Despite misunderstandings and the range of quality offered by various cloud service providers (CSPs), there is a legitimate reason why so many enterprises are shifting their attention and resources to the cloud. Beyond its general positive aspects (and we will discuss little-mentioned ones below), cloud technology has different implications for different types of businesses. Industries such as investing, marketing (both covered previously in this blog), and mobile applications can all benefit in different ways.
Mobile apps will be the focus today. We will briefly review broad cloud advantages – typical and atypical – and then specifically explore relevance to companies in the business of mobile applications.
Advantages of cloud computing – obvious & not so obvious
Joe McKendrick of Forbes mentions several of the main benefits of the cloud that are typically the point of focus. He also covers several unexpected cloud benefits that are conveyed less often.
Several of the standard cloud computing advantages are the following:
- Systemic flexibility
- High scalability
- Excess RAM available as needed
- Excess storage available as needed
- Extreme reliability regarding uptime and backups
- Low latency.
Here are several less frequently discussed but similarly compelling benefits:
- Agility – The cloud makes it extremely easy to get into new lines of business – or to wisely steer clear of them – once you have conducted the applicable testing. Because cloud services are capable of expanding at a moment’s notice, you can test something right away with no long development window.
- Less caution when merging – One reason why many companies choose not to merge is because it’s so costly and labor-intensive to combine two huge pools of data. In some cases, data entry clerks are hired because performing the task with technology is viewed as prohibitively complicated. Cloud data is much more easily accessible.
- Tapping the popular mind – There is concern that cloud computing is so widely used that it has become a dumbed-down version of legitimate business IT. However, it’s precisely that wide net that now makes it so strong, due to a steady flow of ideas and improvements.
- Getting high-level – Pushing technology to the cloud allows tech executives to focus their time and money on crafting and planning, so they can lead rather than put out fires. Four out of every five IT dollars currently go to infrastructure maintenance, a focus that can be redirected forward with the cloud.
Two approaches to cloud-based mobile apps
Matthew Mombrea of ITworld suggests two basic approaches to developing and deploying a mobile app. These different avenues are not specific to the cloud, but cloud computing is a reliable framework for either angle.
- Quick/dirty – This strategy means you are moving fast, with your main focus being a general test-run with the public. You aren’t concerned upfront with availability to a massive audience. You want to make sure that the app is well-received before you think about potential problems of popularity.
- Slow/clean – On the other hand, you can painstakingly develop the app and ensure it is optimized to scale rapidly. You are investing the time upfront to make sure there are no hitches along the way. However, you are investing time – and potentially resources – that could have otherwise gone elsewhere if the app proves to be dead-on-arrival anyway.
Interestingly enough, Mombrea generally uses the first of the two game plans. It’s certainly a more reasonable approach that takes into account how biased we can be in favor of our billion-dollar ideas, which in many cases are actually thousand-dollar ideas.
Specific viability of cloud hosting for mobile apps
Kurt Marko of Network Computing believes the cloud is the new frontier for mobile applications. He believes this because the cloud is capable of resolving many of the issues that arise with mobile apps. The cloud is, in fact, just what IT needs to keep mobile app development manageable at a time when 70% of enterprises are currently developing, or planning to develop, branded mobile apps.
Part of the difficulty with mobile apps, says Marko, is that because cell phones differ and because so little fits on the screen, native apps – ones designed specifically for certain devices – are desirable. This realization has been frustrating to the computer world because the web already went through a native application phase on PCs. Web-based applications were found to be better integrated, better optimized for syncing, and more secure.
The cloud, however, is righting the boat again for mobile apps. Tying a native mobile application to the cloud – via mobile backend-as-a-service (MBaaS) or general cloud app hosting – offers the following advantages:
- data is located in the cloud, in the realm of the tech staff rather than on the user’s device
- ability to place limitations on transfer of information, such as disabling ability to paste into an email client or another application
- automatic synchronization to the cloud (a rough sketch of this pattern follows below).
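To illustrate that last point, here is a minimal Python sketch of pushing locally queued changes from a device to a cloud backend. The endpoint URL and payload shape are hypothetical; an actual MBaaS supplies its own SDK, authentication, and conflict handling.

```python
#!/usr/bin/env python3
"""Minimal sketch of pushing locally queued app data to a cloud backend.

The endpoint URL and payload shape are hypothetical examples.
"""
import json
import urllib.request

SYNC_URL = "https://api.example-mbaas.com/v1/sync"   # hypothetical endpoint

def push_changes(device_id: str, changes: list) -> int:
    """Send pending local changes to the backend; return the HTTP status."""
    body = json.dumps({"device_id": device_id, "changes": changes}).encode()
    request = urllib.request.Request(
        SYNC_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

if __name__ == "__main__":
    pending = [{"note_id": 42, "text": "updated on the phone"}]
    try:
        print("backend responded:", push_changes("device-123", pending))
    except OSError as error:
        # Expected here, since the endpoint above is fictional.
        print("sync failed (fictional endpoint):", error)
```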
Mobile application hosting on the cloud
Choose wisely when selecting a hosting partner for your cloud-based mobile app. As suggested by the above comic, don’t go with the cumulonimbus cloud. Generally, you want a hosting service where cloud expertise isn’t an afterthought but a point of focus. Host your mobile app with Atlantic.Net, where cloud is king.
By Kent Roberts
Firewalls come in essentially three varieties: hardware firewalls, software firewalls, and web application firewalls (WAFs). Typically a hosting company or datacenter infrastructure will take advantage of both of the first two types of firewalls for general use. The third type – the focus of this article – started gaining prominence about a half-decade ago (though there is overlap of these categories, as discussed below).
According to the nonprofit Open Web Application Security Project (OWASP), web application firewalls became more prevalent as hackers started focusing their efforts on apps (e-commerce stores, sales systems, etc.). Essentially, the apps provide different points of entry for intruders, so hackers started zeroing in on them. That point of focus has often allowed them to enter without being noticed (because standard firewalls have been centered on general network activity rather than the range of issues specific to web apps).
Why is a web application firewall necessary?
The basic need for firewalls specific to web apps arises because the Hypertext Transfer Protocol (HTTP) is relatively simple. That protocol defines the back and forth of Internet interaction. Web applications, meanwhile, have become more and more sophisticated as time has gone on. In a security sense, the apps have outgrown the protocol used to deliver them. Specialized protective software – the web application firewall – bridges the divide so that apps aren’t as vulnerable.
There is an additional disconnect between HTTP and web app security related to state. HTTP is stateless, and web apps are typically stateful. In other words, the latter utilizes previous processing information whereas the former does not. This disparity means an additional incompatibility between the two, beyond general complexity: essentially, a web app is “on its own” to establish its parameters and protect itself (enter the WAF).
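As a small illustration of that gap, the Python sketch below shows an application building its own state – an in-memory session store keyed by a cookie value – because HTTP itself remembers nothing between requests. The store and cookie name are assumptions made for the example.

```python
#!/usr/bin/env python3
"""Minimal sketch of application-level state on top of stateless HTTP.

Each request arrives with no memory of the last one, so the app maps a
session cookie to server-side state; the in-memory dict is purely illustrative.
"""
import secrets

SESSIONS = {}   # session_id -> per-user state (illustrative store)

def handle_request(cookies: dict) -> dict:
    """Return the response cookies, creating or reusing a session."""
    session_id = cookies.get("session_id")
    if session_id not in SESSIONS:
        # HTTP gave us nothing to go on, so the app creates its own state.
        session_id = secrets.token_hex(16)
        SESSIONS[session_id] = {"requests_seen": 0}
    SESSIONS[session_id]["requests_seen"] += 1
    return {"session_id": session_id}

if __name__ == "__main__":
    first = handle_request({})              # new visitor, new session
    second = handle_request(first)          # same visitor, state persists
    print(SESSIONS[second["session_id"]])   # {'requests_seen': 2}
```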
What exactly is a web application firewall?
By definition (per OWASP), a WAF is a piece of software that operates at the application level and is intended to protect a web app. Nonetheless, a WAF is not defined by the web app: it’s not a customized solution specific to that application but – similarly to a general software firewall – one that contains parameters to protect against intrusion into a wide variety of frameworks and scripts.
To be clear, there is overlap between the different types of firewalls. Software and hardware firewalls are used in their own right to protect networks. However, WAFs – with their specialized function for web applications – can take the form of either of those two main types. They can be implemented as hardware devices, installed as an actual physical piece of an infrastructure; or they can be used as software, installed on servers or integrated into other devices (e.g., they can be loaded onto hardware firewalls to enhance their protection with WAF capabilities).
Overall function of web application firewalls in an enterprise
Often a company is running dozens of web apps at the same time. Although an enterprise will typically consider the strength of some WAFs more important than others (based on the role played by the app it is protecting), it’s wise to remember that a system may only be as strong as its weakest link. Hackers could be able to access the network, potentially, through any of the firewalls. For that reason, apps that may generally be less vital to business operations should still be reasonably secure.
That said, because of budgetary concerns, systems administration often must place greater or lesser weight on the firewalls protecting certain apps. Here are a few questions that can be asked to strike the proper balance and understand which apps must have the highest degrees of protection:
- Does the app grant availability to sensitive details of any users of the system, whether internal or external parties?
- Does it allow access to proprietary documents or data?
- Does the app play a crucial function within the enterprise? How bad would it be if it went down?
- Is the app itself involved in network or any system protection?
App development & function of individual web application firewalls
Clearly each firewall should be as strong as possible, as discussed above. Ideally, though, the firewall is not the only line of defense: security should be a major factor for custom apps during their development. Loopholes in applications are patched as weaknesses become known, but problems discovered after an app has been in use for a lengthy period can often mean more time and money for a fix.
A web application firewall comes in handy when it is impossible or difficult to make changes to the application, or when the necessary revisions are extensive – in other words, when the app itself cannot readily be changed. Typically a WAF uses a blacklist, protecting against individual, previously logged attack patterns. It can also use a whitelist, specifying the allowable users and interactions for the application.
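Here is a minimal Python sketch of the blacklist/whitelist idea. The attack signatures and allowed paths are illustrative assumptions only; real WAF rule sets are far more extensive and are tuned to the application they protect.

```python
#!/usr/bin/env python3
"""Minimal sketch of WAF-style request filtering with a blacklist and a whitelist.

The patterns and paths below are illustrative examples only.
"""
import re

# Blacklist: signatures of previously seen attack patterns (examples only).
BLACKLIST = [
    re.compile(r"union\s+select", re.IGNORECASE),   # crude SQL-injection probe
    re.compile(r"<script\b", re.IGNORECASE),        # crude XSS probe
    re.compile(r"\.\./"),                           # path traversal
]

# Whitelist: paths the application is expected to serve (examples only).
ALLOWED_PATHS = re.compile(r"^/(|index\.html|products/\d+|cart|checkout)$")

def allow_request(path: str, query: str) -> bool:
    """Return True if the request should be passed through to the web app."""
    if not ALLOWED_PATHS.match(path):
        return False                    # not an allowed interaction
    for pattern in BLACKLIST:
        if pattern.search(query):
            return False                # matches a known attack pattern
    return True

if __name__ == "__main__":
    print(allow_request("/products/42", "ref=email"))            # True
    print(allow_request("/products/42", "id=1 UNION SELECT *"))  # False
```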
Web application firewalls play an important role for companies worldwide. We believe strongly in our own firewalls and security at Atlantic.net. In fact, we believe so much in our reliability that we guarantee a complete absence of downtime. Click here to learn more about what makes us different.
By Kent Roberts
Reliability of a system is measured in uptime, the percentage of time throughout a given window that a site is up and fully operational. Network reliability and high uptime figures are crucial to keeping users happy. One important way reliability is enhanced is with redundancies – safeguards so that if anything fails, the client still won’t experience a problem. A strong system will always have multiple redundancies in place.
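As a quick illustration of what an uptime percentage means in practice, the short Python snippet below converts uptime figures into the downtime they allow per year; 99.9% uptime, for example, still permits roughly 8.8 hours of downtime annually.

```python
#!/usr/bin/env python3
"""Quick arithmetic: downtime allowed per year for a given uptime percentage."""

MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600 minutes

for uptime in (99.0, 99.9, 99.99, 100.0):
    allowed = MINUTES_PER_YEAR * (1 - uptime / 100)
    print(f"{uptime:6.2f}% uptime -> {allowed:8.1f} minutes of downtime per year")
```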
Another simple way to ensure strong performance within a network is to ensure that every piece of machinery is holding up its weight (and this actually has a redundancy component as well – see below). Whenever possible, it’s better to split work up among all of your devices rather than letting one piece perform all the work. Otherwise, you see strain on one machine as the others sit nearby, daydreaming. Server load balancing is the practice of dividing work evenly between various servers.
Basics of load-balancing & how clustering is related
When a load balancer is put into place, incoming traffic – requests from users’ web browsers for information held on the servers – routes through the device before hitting the servers. The point that all of this traffic reaches, where the load balancer sits, is a single network address. As the load balancer receives requests, it divides the work evenly throughout a server cluster.
The servers are called a cluster because a cluster is a group of computers operating in the same basic manner to achieve the same objectives. In a basic load-balancing setup, all of the servers behind the load balancer perform the same basic function – equal work based on what comes through the load balancer.
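Here is a minimal Python sketch of the simplest balancing strategy, round robin, which rotates through a cluster of backend addresses. The addresses are placeholders, and real load balancers also track server health and pull failed members out of the rotation.

```python
#!/usr/bin/env python3
"""Minimal sketch of round-robin distribution across a server cluster.

The backend addresses are placeholders chosen for the example.
"""
import itertools

# The cluster behind the load balancer's single network address.
BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # example private IPs
_rotation = itertools.cycle(BACKENDS)

def pick_backend() -> str:
    """Choose the next server in the rotation for an incoming request."""
    return next(_rotation)

if __name__ == "__main__":
    for request_number in range(6):
        print(f"request {request_number} -> {pick_backend()}")
```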
Advantages of load balancing
In the absence of a load balancer, anyone who accesses a site is hitting the same server. That server is essentially being inundated with requests (during peak times or as the site is becoming more popular).
When an upswing in traffic occurs, people visiting the site will either experience slow page loads, or the server will start denying requests. Not only will these issues make those accessing the site frustrated, but search engines will punish the site as well.
Installing a server load balancer allows the speed at which the site functions to remain high even during times of exceptional traffic. Even if a server fails (and here’s the redundancy aspect mentioned above), it has backup. All of your resources are utilized to their utmost capacity.
Another major advantage of load balancing is that it’s a simple, easily deployable process; and it’s built for scalability. If you need to add more servers to your infrastructure, you just plug them in. The load balancer recognizes any new devices and continues to balance appropriately, with any new machines taken into account.
How more complex load balancers differ from simpler ones
Essentially, the range of load balancers can be understood in terms of how well they process different types of data. These data types can be understood in terms of the OSI (Open Systems Interconnection) model. Here are the seven layers, with the top layer representing the most sophisticated and working down toward the most basic:
- Application (Layer 7)
- Presentation (Layer 6)
- Session (Layer 5)
- Transport (Layer 4)
- Network (Layer 3)
- Data Link (Layer 2)
- Physical (Layer 1)
Every load balancer can accurately process OSI layers 2 and 3. As the devices become more sophisticated, they become capable of handling the top four layers. The reason load balancers must be more sophisticated to handle application information is that the data itself is more complex. Figuring out the degree of workload needed by a particular request requires a more intelligent load balancer, so that it does not accidentally overload one of the servers by misunderstanding incoming packets.
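To make the distinction concrete, the Python sketch below contrasts a layer-4-style decision (based only on connection details) with a layer-7-style decision (based on the content of the request). The backend names and URL prefixes are assumptions made for the example.

```python
#!/usr/bin/env python3
"""Minimal sketch contrasting a layer-4 and a layer-7 balancing decision.

Backend names and URL prefixes are illustrative; real devices use connection
tables and consistent hashing rather than Python's built-in hash().
"""

BACKENDS = ["app-1", "app-2", "app-3"]

def layer4_choice(client_ip: str, client_port: int) -> str:
    """Layer 4: decide from connection details alone, without reading the request."""
    return BACKENDS[hash((client_ip, client_port)) % len(BACKENDS)]

def layer7_choice(http_path: str) -> str:
    """Layer 7: inspect the application data and route heavier work separately."""
    if http_path.startswith("/search"):
        return "app-3"          # assumed to be sized for expensive queries
    return BACKENDS[hash(http_path) % 2]

if __name__ == "__main__":
    print(layer4_choice("203.0.113.5", 54321))
    print(layer7_choice("/search?q=servers"))
    print(layer7_choice("/images/logo.png"))
```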
Other functions of load balancers
Generally speaking, load balancers use network address translation (NAT) so that the IP of the exact server being accessed is unclear to the client. This relates to the “one network address” aspect discussed above. The IP address listed with the load balancer is what appears to the client (browser asking for information) to be the server. In this way, the load balancer cloaks locations, providing a security function.
Depending on how the load balancer is configured by its owner, it sends each request out to a server. The server processes the request and sends data back through the load balancer to the client’s browser. In other words, the load balancer is not just in charge of balancing traffic; it also properly routes requests and responses so that nothing goes to the wrong server or client.
As you can see, load balancing is fairly simple, but using high-quality equipment and implementing the practice correctly are crucial to a strong infrastructure. Load balancing is one of the many safeguards we have in place for high redundancy and 100% uptime. If you are in the market for dedicated, VPS, cloud, or any other hosting solution, find out why Atlantic.net is the ideal choice.
By Kent Roberts
Colocation is one of the options for hosting offered at Atlantic.Net. With colocation, you get space and bandwidth. Essentially, you are taking advantage of a hosting company’s expertise in providing the right type of physical environment for your hardware, along with its ability to connect you to the web properly and effectively.
The only thing that is different about colocation is that you are providing your own equipment. With any other type of hosting – dedicated, shared, VPS (virtual private server), cloud, whatever – you use equipment provided by the hosting company. Oftentimes, people like to be in control of the equipment. For one thing, they think of it as an investment. Additionally, they like to be able to customize each part if they want – to “build” the server like a custom car.
Along with having to figure out what type of equipment you want, colocation also has the challenge of not being managed to the same degree as other hosting solutions. With colocation, the equipment is all yours, so depending on the colocation facility you use, you may have to handle certain aspects of its maintenance. Of course, the climate control of the room, disaster recovery, etc., are still handled by the host.
As you can see, colocation is complicated, but it is a very popular choice, especially for companies with a growing number of servers. Let’s look at a number of different reasons why companies choose colocation so we can better understand this hosting option. I will review some ideas from the “young entrepreneur network” EntreRev and provide some thoughts of my own.
One reason to choose colocation, suggested by EntreRev, is 24/7 tech support. That sounds strange if comparing it to other hosting solutions with the same feature, but EntreRev is contrasting colocation to keeping your own server or servers in-house, as many small businesses do. EntreRev also notes the level of skill at a colocation center (and keep in mind, many hosting companies also function as colocation facilities – there’s a lot of crossover).
That skill level is an important point. If you hire an IT person or use an independent contractor, chances are they will not have the same server and data center expertise. IT is a massive field, so you want specialization. The tech professionals at data centers specialize in setting up equipment and maintaining it.
The infrastructure within a colocation facility that is “purpose-built” – built specifically with that usage in mind – is completely designed for all the needs of a tech environment. Furthermore, a quality colocation center or web host is thoroughly focused on redundancies. That means you don’t just get bandwidth, but you also won’t go through periods of time when you’re blocked from using it (downtime). In other words, the network is highly reliable because many checks and balances are in place.
EntreRev says that space is one of the largest initial factors that an SMB faces with its technology. You can’t keep adding servers to a closet, because you eventually run out of space. Get a little bigger, and the same becomes true of a room. Turn to the idea of building a data center, and it’s unclear how large to make it. Do you make it triple the size you need currently and leave a large part of it empty? Even if you can afford the upfront expense, sizing immediately becomes confusing. With colocation, you rent the amount of space you need and bump it up as you go.
You will experience strength in numbers at a data center. That means that not only will you get your IT servicing for less and your space for less, but you will also be consuming energy with many other businesses. It’s been proven time and time again that a purpose-built data center excels in energy efficiency, reducing all clients’ power bills. That makes your business greener. You save money, but you also have all the benefits of environmental friendliness (such as use for marketing, etc.).
Again regarding strength in numbers, the rates for bandwidth and energy are reduced for bulk buyers such as a colocation facility. Massive amounts of each of these elements are negotiated by a savvy colocation center, and some of those savings are passed on to clients.
You will have multiple carriers to choose from, which enhances competition. This also gets you better prices on bandwidth. If you have a problem with one carrier, you can switch to another. If a carrier suffers an outage, you can jump ship immediately: that failover capability is integrated into an adept colocation center. For example, at Atlantic, we automatically switch you over if one of the carriers goes down.
Colocation is not for everyone. If you feel you are ready to go out, get the equipment, and take on more of the technical responsibility for your servers yourself, this option may be right for you. At Atlantic, we are proud of our colocation services. We have worked hard to make them as sophisticated and reliable as our competitors’ offerings – or more so – at an affordable price. To see what we have to offer, click here.
By Kent Roberts