The History of Cloud Computing


Believe it or not, the concept of cloud computing technologies has actually been around since the mid-1990s; back then, it had a more complex name: “on-demand infrastructure.” How has this technology evolved over the past two decades to become the immensely powerful phenomenon it is today?

From basic web server architecture to simple database management, and from less-than-technical email applications to minimal disk space, the original web hosting services were born in the 1990s and exploded in popularity during the dot-com era (1995-2001). Some of the earliest shared hosting companies included ValueWeb, Interland and HostGator.

The first shared hosting solutions offered multi-tenancy capability, automated provisioning, a monthly billing cycle and an easy-to-use interface for maintaining resources. However, these solutions did not inherently provide infrastructure on demand, resource-size flexibility or scalability. It was a simple offering, but it helped lay the foundation for the cloud hosting industry.

Around 1998, virtual private servers (VPS) arrived on the scene. By offering more flexibility and administrative root access, VPS solutions were a significant step up from the shared hosting capabilities of the past.

Early VPS hosting companies provided servers that offered occasional infrastructure on demand, slight resource-size flexibility, multi-tenancy, automated provisioning and the convenience of monthly, quarterly or annual billing cycles.

For businesses that needed stricter security measures and more stable resources, dedicated hosting solutions, developed soon after the release of VPS, did the trick. These servers offered more power along with complete administrative access and control of server resources.

These dedicated servers did not provide multi-tenancy, network flexibility or scalability. However, providers supplied both managed and unmanaged dedicated hosting options, giving customers the ability to choose between relying on professionals to maintain the architecture, or employing an IT department to handle it.

The launch of Amazon Web Services in 2006 really began to change the industry. Between 2007 and 2010, several managed hosting companies developed and released a more scalable and more virtualized Infrastructure as a Service (IaaS) offering. Today, this is referred to as grid/utility computing.

IaaS providers offer computers—whether physical or virtual—and other resources to customers. The earliest providers of utility computing included Layered Technologies, NaviSite and Savvis. These hosts offered infrastructure on demand, partial resource-size flexibility, multi-tenancy, occasional automated provisioning, partial scalability, a monthly, quarterly or annual billing rate and a slightly easy-to-use interface.

As discussed above, the launch of Amazon Web Services really kicked off cloud computing as we know it. The AWS system moved beyond the grid/utility computing model toward what we can only call “Public Cloud Computing 1.0.”

Between 2008 and 2009, developers and startup hosting companies alike gained the ability to compute and store data like never before, and, with time, they were eventually able to scale these infrastructure resources on a whim. Along with Amazon, Rackspace Hosting was a main driver of this transition.

These cloud servers offered infrastructure on demand, partial resource-size flexibility, multi-tenancy, automated provisioning, slight scalability, hourly billing (the first of its kind!) and a fairly easy-to-use interface.

The introduction of hourly billing in cloud computing 1.0 was a big deal for both providers and customers. This model let customers pay for only what they actually used, rather than a previously agreed-upon subscription price. By narrowing billing down to the hour, customers with intermittent workloads saved real money.
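The saving is simple arithmetic. A minimal sketch in Python illustrates the comparison; the $50/month subscription and $0.10/hour rates are made-up figures for illustration, not historical provider pricing:

```python
# Hypothetical rates for illustration only -- not actual provider pricing.
MONTHLY_FLAT_RATE = 50.00   # fixed subscription price per month
HOURLY_RATE = 0.10          # per-hour rate under hourly billing

def monthly_cost_hourly_billing(hours_used: float) -> float:
    """Cost for a month when billed only for the hours a server actually ran."""
    return round(hours_used * HOURLY_RATE, 2)

# A development server that runs only during business hours
# (8 hours/day, ~22 working days) versus a flat monthly subscription:
hours = 8 * 22                                      # 176 hours of actual use
pay_per_hour = monthly_cost_hourly_billing(hours)   # $17.60 vs. $50.00 flat
print(f"Hourly billing: ${pay_per_hour:.2f}  Flat rate: ${MONTHLY_FLAT_RATE:.2f}")
```

A server running around the clock (roughly 730 hours) would cost more under these hypothetical hourly rates, which is exactly why the model rewards customers whose usage is bursty rather than constant.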

Today, we are witnessing a progression into the Cloud Computing 2.0 era. The next generation of cloud computing will need to be easier, more flexible and billed based upon a true utility model (like that of electricity and water) in order to provide customers with the services and products they need.

Current cloud 2.0 companies, such as Atlantic.Net, offer infrastructure on demand, fully customizable resource-size flexibility, multi-tenancy, applications on demand, network flexibility, automated provisioning, complete scalability, billing down to the minute or second and a simple drag-and-drop interface control for ease of use. Sure, cloud computing 2.0 companies are offering services that speak truer to the definition of the cloud than ever before, but we’re still not quite there.

In the future, the Cloud and the technologies serving as its backbone will need to cross the metaphorical river of development to attract a wider audience beyond organizations and enterprises. Small, successful startups are driving innovation today, and cloud computing will need to become more scalable, more flexible and more closely based upon a true utility model in order to drive this innovation even further.

Since 1994, Atlantic.Net has stayed ahead of the competition to fuel innovation and move technologies into environments previously unimaginable. As the technology continues to evolve from 2.0 into 3.0 and beyond, you can rest assured knowing that you are relying on the most up-to-date architecture and standards in the business.

To learn more about the Atlantic.Net business model, to see our full line of one-click cloud applications, and to find out how transitioning to a cloud hosting service can help transform your business, contact us today.
