
Must-Have Plugins for WordPress Developers

February 17, 2016, by Adnan Raja, under Cloud Hosting

WordPress is by far the most popular content management system worldwide. Let’s take a look at some of the most important plugins for developers.

  • Popularity of WordPress
  • Most Important Developer Plugins
  • One-Click WP Hosting

The growth of WordPress since its inception is mind-boggling, with a skyrocketing rise that rivaled the upward trajectory of Google or Apple or Facebook at their tipping points. WordPress is everywhere in part because there are so many useful plugins to easily expand functionality. However, the search for plugins can be a bit daunting and disorganized since there are tens of thousands of them – 42,945 official ones in the WP catalog at press time.

Let’s review the CMS’s incredible prevalence as well as the plugins that are the most helpful for developers.

Read More


What is: glibc (GNU C Library) Vulnerability (CVE-2015-7547) Patch and Information

February 17, 2016, by Mason Moody, under Cloud Hosting

On Tuesday, 16 February 2016, Google security researchers Fermin J. Serna and Kevin Stadmeyer announced the discovery of a vulnerability in the GNU C library (called “glibc” or “libc6”, depending on the specific platform) that underlies many Unix/Linux systems. Similar to the GHOST vulnerability, exploitation of this vulnerability involves a buffer overflow that can cause a system crash or allow an attacker to remotely execute malicious code.

How It Works

When the Google researchers reported the vulnerability to the C library maintainers, they discovered that the bug had previously been reported in July 2015 (hence its 2015 CVE number). Red Hat researchers had been working quietly to understand the full extent of the issue and, presumably in coordination with the Google researchers, waited until an effective patch was in place before making the announcement public.

In short, this exploit leaves any Linux-based cloud server that uses glibc and performs domain name lookups potentially vulnerable to attack. Specifically, the proof-of-concept the researchers have demonstrated employs specially crafted packets that cause the getaddrinfo() function to mishandle certain memory buffers, triggering a buffer overflow (a commonly used tactic among those who look for vulnerabilities to exploit). The publicly available proof-of-concept causes a server to crash; the researchers have withheld the proof-of-concept code that would allow for remote code execution.

Since so many server functions utilize the affected library–including sudo, curl, and ssh, to name a few–patching glibc is the safest path to protect your servers from this sort of exploit. While there is no evidence of this vulnerability being exploited in the wild, any server running version 2.9 or later of glibc should be updated to the patched version as soon as possible (if you are running a version of the C library older than 2.9, your best bet is still to upgrade to address any of the other known vulnerabilities that the intervening upgrades have patched).

How To Identify Your Version of glibc

You can identify the version of the C library you are currently running with the following command:

ldd --version

 

Example of output from `ldd --version`
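
The first line of the output shows the library version. On a CentOS 7 system, for example, it looks something like this (the exact wording and version vary by distribution):

ldd (GNU libc) 2.17
Copyright (C) 2012 Free Software Foundation, Inc.

Note that ldd reports only the upstream version, not the distribution's package release number, which is how the patched versions below are identified.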

Patched glibc Versions

Most repositories now have patched versions of the library available through their respective package managers, including the following (likely non-exhaustive) list:

  • CentOS 6: glibc-2.12-1.166.el6_7.7
  • CentOS 7: glibc-2.17-106.el7_2.4
  • Ubuntu 15.10: libc6 2.21-0ubuntu4.1
  • Ubuntu 14.04 LTS: libc6 2.19-0ubuntu6.7
  • Ubuntu 12.04 LTS: libc6 2.15-0ubuntu10.13
  • Debian 6: libc6 2.11.3-4+deb6u11
  • Debian 7: libc6 2.13-38+deb7u10
  • Debian 8: libc6 2.19-18+deb8u3
  • Debian Sid (unstable): libc6 2.21-8
  • Arch Linux: glibc-2.23-1

How To Patch the glibc Vulnerability

The simplest way to update is through your distribution's package manager.

CentOS/RedHat:

sudo yum update glibc
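
You can confirm that the patched package is installed by querying the package manager; the version reported should match or exceed the patched version listed above:

rpm -q glibc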

Ubuntu:

sudo apt-get install libc6

Debian:

sudo apt-get install libc6
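
On both Ubuntu and Debian, you can confirm the installed version with dpkg; again, it should match or exceed the patched version for your release:

dpkg -s libc6 | grep Version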

Arch:

sudo pacman -S "glibc>=2.23"

Once you install the updated version, you will need to restart each service that uses the C library to ensure that it is running the patched version. Your safest bet is to schedule a reboot of your cloud server.
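
If an immediate reboot isn't practical, one rough way to spot processes still mapped to the old, now-deleted library is to look for deleted file mappings with lsof (a quick check, not an exhaustive one):

lsof | grep libc | grep DEL

On CentOS/Red Hat, the needs-restarting utility from the yum-utils package gives a more convenient report of processes that should be restarted.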

Update (2016-02-18): Edited the Debian package names to correct the name of the C library package from “eglibc” to “libc6”.

Update (2016-02-24): Added the patched glibc version for Arch Linux along with update instructions.


How to Manage Python Virtual Environments

February 17, 2016, by Brad Pitcher, under Cloud Hosting

Introduction

Using virtual environments for your Python project deployments allows you to isolate your project’s dependencies in an environment that is unaffected by the system Python installation(s). This makes it possible to run multiple projects with otherwise conflicting dependencies side-by-side on the same system. It also gives you peace of mind knowing that your code will run on any system with Python installed, regardless of whether that system has any of your required Python libraries installed system-wide.

Prerequisites

  • A desktop or cloud server running Windows, OS X, Linux or any other operating system supported by Python
  • A working Python installation
  • (optional) If you plan to install Python libraries that have C extensions (such as NumPy), then you will need to have a C compiler and the Python development headers installed.


What Is a Virtual Environment?

Think of a virtual environment as a complete Python environment that is entirely separate and distinct from any Python environment installed at the operating system (OS) level. When we say “environment” here, we are referring to the Python standard library and header files, any third-party Python packages ( including those with C extensions), and any binary executables related to these. Having a separate environment for each project you are working on makes it much easier to manage dependencies. When you activate a Python virtual environment, your PATH changes to prioritize the Python binary inside your virtual environment over the one(s) installed to your OS. Using the Python binary in the virtual environment also means you will be using Python libraries installed in the virtual environment rather than the ones installed on your OS.

Python Installation

The good news is that Python version 3.3 or later comes with a virtual environment manager (venv) built in. If you use an earlier Python version, however, you can install the virtual environment manager (virtualenv) using pip:

sudo pip install virtualenv

There is little difference between venv and virtualenv, and this article will stick to discussing the shared features.

Managing your Python Virtual Environments

Now that you have a virtual environment manager (either venv or virtualenv), it’s time to start creating some isolated Python environments!

To simplify the discussion, from here on out we will refer to the environment manager as venv; you can assume that any command will work equally well with virtualenv if you replace pyvenv with virtualenv. If you are using venv, your installed command may have a version suffix, such as pyvenv-3.5; use that in place of pyvenv in the examples given.

Now, to get started, open a command prompt to a sandbox directory and enter the command

pyvenv venv-test

You should now have a venv-test directory structure that looks like this:

venv-test
├── bin (called Scripts in Windows)
├── include
├── lib (called Lib in Windows)
├── lib64 -> lib
└── pyvenv.cfg (venv only)

This directory structure is very similar to what you will see in a system-wide Python install, but it stands alone.

Now let’s activate it. At the same command prompt:

source venv-test/bin/activate

# or in Windows:
venv-test\Scripts\activate

You may see your command prompt change to show your virtual environment's name (in our case, (venv-test)) at the beginning of the prompt. To confirm that the virtual environment is active, run

pip --version

The output should indicate a path for the pip binary within the virtual environment directory.
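
For example (version numbers and paths will differ on your system):

pip 8.0.2 from /home/user/venv-test/lib/python3.5/site-packages (python 3.5)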

Using pip in a Virtual Environment

You can easily install packages to your new virtual environment using pip, just as you would in your “real” Python environment. For example, let’s install the Django web framework in this virtual environment:

pip install django

Notice how administrator privileges were not required? That is because the virtual environment is completely owned by the user who created it. Let’s take a look at our packages now:

pip freeze

The pip freeze command lists every Python package installed and its version. You should see that Django has been installed. Since we are running pip within an activated virtual environment, the list includes only packages installed to the virtual environment, not system-wide packages.
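
The output will look something like this, with the version number reflecting whatever release was current when you installed it:

Django==1.9.2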

You can use pip in a virtual environment exactly as you would use the system-wide pip with one crucial distinction: administrator privileges are not required, which is very convenient. You can see all the Python packages for your virtual environment under the venv-test/lib/python?.?/site-packages/ directory (venv-test/Lib/site-packages on Windows) which closely mirrors where packages are installed in the global Python directory structure.

For a quick refresher on pip, check out our article on other useful pip commands.


More Virtual Environment Options

As with most UNIX commands, you can add --help after pyvenv in a terminal window to get a complete list of options that can be used with the command. We’ll go over a few of them in more detail here.

--clear

You can use this option to clean out an existing virtual environment, so you can start from scratch. This option will remove any packages you’ve installed within the virtual environment. We could use this option on our example virtual environment like so:

pyvenv --clear venv-test

You can run the --clear option with the virtual environment activated or not. You can verify that the virtual environment is clear by activating it and running pip freeze.

--system-site-packages

This option makes it so that any Python running in the virtual environment can access both packages installed to the virtual environment and packages installed system-wide. It can be useful when there are certain system-wide packages you want available in every project you work on and the exact package version isn't critical. We could use this option to create a new virtual environment like so:

pyvenv --system-site-packages venv-system-test

Now if you activate the new virtual environment and run pip freeze, you will see all your system-wide packages as well as the packages installed only in the virtual environment.

--without-pip (--no-pip)

While pip is a wonderful tool that most of us Python developers rely on completely, there are legitimate situations where one might not want to include it in a virtual environment. For example, if storage space is at a premium and the virtual environment is for a production deployment that doesn't need pip, it could be useful to leave it out. You can create a virtual environment without pip like so:

pyvenv --without-pip venv-nopip-test

Note: the option is different in virtualenv.

virtualenv --no-pip venv-nopip-test


Deactivation of a Virtual Environment

When you are finished working in a virtual environment, you must deactivate it to return to your system's global environment. Otherwise, any commands you run in the same terminal window will continue to use the virtual environment.

deactivate

Alternatively, you could simply close the terminal window.

Additional tools

While virtual environments are an incredible tool in their own right, the Python community has come up with a few ways to make virtual environments even better to work with. We’ll discuss a few of them in more detail below.

virtualenvwrapper

One of the most popular tools used with virtual environments is virtualenvwrapper. You can install it by running

sudo pip install virtualenvwrapper

# or, on Windows:
pip install virtualenvwrapper-win

virtualenvwrapper organizes all virtual environments in one place, provides convenient wrappers for managing virtual environments, and gives you tab completion for commands that take a virtual environment as an argument.
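
As a quick sketch of the workflow (this assumes virtualenvwrapper.sh has been sourced in your shell startup file per the virtualenvwrapper documentation, and myproject is a placeholder name):

mkvirtualenv myproject    # create and activate a new environment
workon myproject          # re-activate it later, from any directory
deactivate                # leave the environment
rmvirtualenv myproject    # delete it entirely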

autoenv

Another popular tool that goes beyond the basics of virtual environments is autoenv, another hit from the prolific Kenneth Reitz, creator of python-requests.

sudo pip install autoenv

With autoenv, you can automatically activate a virtual environment when you cd into its project's directory. This addition really comes in handy for local development.
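
For example, you might drop a .env file like this into a project directory (the path to the activate script is a placeholder; point it at your own virtual environment). Each time you cd into that directory, autoenv runs the file and activates the environment:

echo "source ~/venvs/myproject/bin/activate" > .env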

tox

If you work on Python packages that support several different Python versions or different versions of a major dependency, like Django, tox can help manage testing across these different environments.

sudo pip install tox

When you specify all the different environments you want to support in a tox.ini file, you can then run the tox command to run your unit test suite in all the environments specified. tox automatically creates virtual environments for all the different Python versions specified, installs required packages, and runs the test suite.
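
As a minimal sketch, a tox.ini for a package tested against Python 2.7 and 3.5 might look like this (pytest stands in for whatever test runner your project uses):

[tox]
envlist = py27,py35

[testenv]
deps = pytest
commands = py.test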

Final Thoughts

I hope by now you are convinced of the usefulness and necessity of Python virtual environments, in both development and production environments. Try them out, along with the additional tools provided by the community, and find the workflow that works best for you. And if you have an idea for improvement, share it. You’ll find that the Python community is very receptive to hearing new ideas.


What Is File Compression?

February 15, 2016, by John Papiewski, under Cloud Hosting
Target audience

This article is good for general audiences and provides an introduction to data compression techniques and uses.

Introduction

File compression is a technique for “squeezing” data files so that they take up less storage space, whether on a hard drive or other media. Many different kinds of software, including backup programs, operating systems, media apps, and file management utilities, use this technique. While the type of source file and the type of compression algorithm determine how well compression works, a compressed set of an average mix of files typically takes about 50 percent less space than the originals. This technology has applications ranging from archives and backups to media and software distribution.

Effectiveness

Most compression techniques work by reducing the space redundant information in a file takes up. The more redundancy the compression algorithm detects, the smaller the compressed file becomes. Text files, for example, may have many repeated words or letter combinations that can produce significant compression–as much as 80%, in some cases.

Databases and spreadsheets often also make good candidates for file compression because they, too, typically have repeated content. Conversely, files that have already been compressed, such as MP3s and JPEGs, have low redundancy. Compressing them further yields results only a few percent smaller than the originals–in some cases, they may become slightly larger when compressed, since the compression can add a small amount of management data to the file.
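
You can see this difference for yourself with gzip's verbose flag, which reports the space saved for each file (the file names here are placeholders; the -k flag, available in gzip 1.6 and later, keeps the originals):

gzip -k -v notes.txt photo.jpg

The text file will typically report a substantial percentage saved, while the JPEG will report little or none.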

Lossless vs. Lossy Compression

Compression comes in two basic types, lossless and lossy. A lossless compressed file retains all information, so decompressing it restores the original file in its entirety. Most lossless compression algorithms build upon the work Abraham Lempel and Jacob Ziv pioneered in the late 1970s in creating the algorithms that came to be called LZ (many subsequent compression algorithms build upon this work, so their names begin with this pattern: LZO, LZW, LZX, LZJB, etc.). The algorithm uses an adaptive technique that analyzes the source file for strings of characters that repeat. The larger the repeated string it can find, and the more often that string recurs through the file, the more it can compress the output file. Documents, spreadsheets, and other similar files are often compressed with lossless techniques like these LZ-based algorithms.

Lossy compression can often produce more compact results by discarding data that may not affect the final resolution of the file. Files relying upon human perception often utilize lossy compression, since the source material may have more resolution than we can realistically perceive. For example, a photo in its raw form may take 5MB, but if you want to use it on a web page, using that photo would cause the page to load more slowly. Using an image editor and lossy compression, you might create a compressed version of that photo that is 200KB. It may lose some of the clarity of the original but is still perfectly usable and is far quicker to download.

Archiving

It is frequently convenient to package many files and/or folders into a single compressed file, such as for emailing a collection of files or distributing a complex software application. This packaged collection of files is called an archive. Some compression programs also let you combine multiple files, providing the dual benefit of smaller space and archival packaging. Other programs, particularly in the Linux/Unix domain, only handle compression of one file at a time; archiving usually requires a separate program.

Windows Compression Software

PKZIP, a commercially available utility program first introduced in the late 1980s, has become a de facto compression standard for the Microsoft Windows environment. PKZIP compresses, decompresses, and allows the creation of complex archives, saving them with the file extension .zip. In recent years, Microsoft has bundled PKZIP technology into Windows, allowing the operating system to automatically recognize and open most zip files. Open-source compression utilities are also available, such as PeaZip, 7-Zip, and gzip. Windows also has built-in software that lets you designate files, folders, and entire drives as compressed, extending the capacity of storage media.

Linux Compression Software

Linux has several different useful utilities for file compression, such as bzip2, gzip, and xz. These utilities are single-purpose and compress single files only–they do not by themselves create archives. The tar utility (from “Tape ARchive”) handles archiving, often in conjunction with these compression utilities. Linux, like Windows, uses the combination of compression and archiving to reduce the space some files (such as log files) take up.
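
For example, a single tar command can gather a directory into one archive and gzip-compress it in the same step, and a companion command unpacks it again (the paths are illustrative):

tar -czvf logs.tar.gz /var/log/myapp

tar -xzvf logs.tar.gz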

Conclusion

File compression lets you pack more data into a given amount of storage space. In addition to saving space on hard drives and other media, compression can dramatically improve the speed of file downloads. The technology is available as an integral part of most modern operating systems or as stand-alone programs.

 

Atlantic.Net

Atlantic.Net offers state-of-the-art cloud servers that handle huge amounts of data for over 50,000 customers on a daily basis. Redundant backup, excellent customer service, and technical support go hand in hand with our popular hosting solutions like cPanel and Windows Cloud Hosting.


What is: PHP 7 vs. HHVM

February 10, 2016, by Thomas Meier, under Cloud Hosting

Origin Story of the Rivalry Between Facebook and PHP

Back in 2010, Facebook developers announced that they had been working on a solution to the rising costs of running Facebook's cloud servers. Due to the ever-growing resource demands placed on Facebook, they needed to develop a solution that would not require substantial changes to their source code but would still offer optimized performance.

Their solution was “HipHop for PHP” (HPHPc), which translated the PHP codebase of Facebook into heavily optimized C++ and then compiled that code with g++. The project’s success was considerable–Facebook boasted that they were able to reduce CPU usage on their web servers by around 50%, translating to a significant reduction in the overhead cost of running their servers.

Because of this success, Facebook open-sourced the project with the goal of showing the web community how easy it was to enhance website performance without making major changes to an existing codebase. Facebook developers often stated during presentations that PHP's interpreter needed optimization and that this was one of the primary reasons behind their development of the HipHop platform.

PHP's development team may have taken this criticism to heart when they began work on the next generation of PHP; PHP 7's development, which began in 2014, produced significant performance upgrades over PHP 5.x. They started by analyzing PHP's performance on popular platforms such as WordPress, Drupal, and phpBB. By focusing on enhancing performance on these popular web platforms, the development team was able to make numerous small changes to the existing source code. After over a year of finding, improving, and testing these changes, they were able to considerably reduce the source code of PHP, resulting in a faster and more lightweight language.

The Goals and Features of HHVM

After the success of HPHPc, Facebook decided to invest further in HPHPc to push for a more substantial increase in performance and a greater reduction in the resources necessary to run one of the internet's most high-traffic websites. Their next step was to create a virtual machine to execute the HPHPc code, a VM that came to be called the HipHop Virtual Machine (HHVM).

HHVM is a virtual machine that executes programs written in PHP or Facebook's own Hack programming language. It works by parsing PHP or Hack code into Facebook's own Abstract Syntax Tree and converting that tree into untyped bytecode and metadata in a format they call HipHopByteCode (HHBC).

HHVM then analyzes the HHBC and converts it into a typed Intermediate Representation (IR), which it in turn converts into x64 machine code that it executes directly.

Summary of HHVM Execution Flow:

PHP source --(parse)--> Abstract Syntax Tree --(emit)--> Bytecode --(analyze)--> Intermediate Representation --(codegen)--> x64 machine code

Why Does HHVM Use C++?

It may not be immediately obvious why you would convert PHP code to C++. Facebook argues that C++ gives developers a much-needed balance between performance and maintainability, noting that C++ offers many convenient features, such as:

  • virtual methods
  • multiple inheritance
  • templates
  • macros
  • reinterpret_cast vs. dynamic_cast
  • plain old data vs. constructors/destructors
  • raw pointers vs. references vs. smart pointers
  • stack allocation vs. malloc vs. new

One important takeaway from this list is that you have to know what your bytecode will look like at compile time in order to properly customize HHVM to optimize your code.

The Goals and Features of PHP 7

PHP 7's development team sought to improve the performance of the language's source code, with the end goal that any website could upgrade to PHP 7 and see a substantial increase in performance without having to make any changes to the site's current code.

They boast that, with the source code streamlining, the decreased memory usage, and the inclusion of an Abstract Syntax Tree to boost the performance of the PHP parser, users can see as much as a doubling in performance compared to PHP 5.x. The addition of a secondary file-based cache further augments OPcache, which has been bundled with PHP since version 5.5.

The Major Difference Between HHVM and PHP 7

While HHVM and PHP 7 share the goal of improving the speed and performance of executed PHP code, they take vastly different approaches. One noteworthy difference between the two is HHVM's "Just in Time" (JIT) compiler, which the PHP 7 development team credited as the reason many early benchmarks showed HHVM outperforming PHP 7. So while much of PHP 7's effort focused on optimizing the language's source code, the team also added an Abstract Syntax Tree (AST) to the language, setting the stage for the inclusion of a JIT compiler in a future version of PHP 7. A JIT compiler should allow PHP 7 to significantly outperform HHVM in any non-Facebook application.

Who is the winner?

Despite the rivalry between PHP's and Facebook's development teams, both acknowledge that performance will vary widely from system to system. Because both projects are open source, they encourage community feedback to help them further develop and optimize their code. Developers with a large PHP codebase now have the potential to increase its performance with HHVM. And for those who mostly handle small websites that don't have anywhere near the resource requirements of Facebook, the optimization and performance updates that PHP 7 brings will help enhance the user experience.


Features to Simplify Your Cloud Hosting Comparison

February 5, 2016, by Adnan Raja, under Cloud Hosting

Which cloud hosting provider is the best option out there? When you look at infrastructure-as-a-service companies, you will find that the public cloud offered through Atlantic.Net is stronger than those of major competitors including AWS, Digital Ocean, Linode, and Rackspace.

  • Cloud Hosting Comparison
  • Features at Atlantic.Net
  • The Choice is Simple

Cloud Hosting Comparison

Every cloud provider out there wants to differentiate itself from the competition. The fact is that some companies try to outperform their rivals primarily through branding and becoming a recognized name rather than by developing a truly higher-quality product. That is why you see a lot of disappointment in many cloud hosting reviews.

Read More


Elasticsearch Distributed NoSQL Database – What Is It and Should You Use It?

February 2, 2016, by Sam Guiliano, under Cloud Hosting

Are you trying to decide whether or not Elasticsearch might be right for your company? Here is a look at its benefits.

  • What is Elasticsearch?
  • Features
  • One Programmer’s Perspective
  • Strong Elasticsearch Hosting

What is Elasticsearch?

Elasticsearch is a full-text, distributed NoSQL database. In other words, it uses documents rather than schemas or tables. It's a free, open source tool that allows for real-time searching and analyzing of your data. People appreciate this system because it allows you to run metrics on your data immediately, so you can understand it right away and on an ongoing basis.
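
Because Elasticsearch exposes a RESTful JSON interface, a quick way to get a feel for it is with curl. This sketch assumes an instance listening on the default port, 9200, and searches across all indices for documents containing "error":

curl -XGET 'http://localhost:9200/_search?q=error'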

Read More


What is: Solid State Drives (SSDs) – A Non-Expert’s Guide

February 2, 2016, by John Papiewski, under Cloud Hosting
Target Audience

This article is intended for non-specialists wanting to know a little bit more about SSDs.

Introduction

The solid-state drive (SSD) is a relatively recent addition to the technologies available for mass data storage. In place of the spinning magnetic disk used in hard disk drives (HDDs) since the 1950s, an SSD relies on solid-state digital chips to store information. In recent years, SSDs have seen increasing use in many computer systems, from laptops to commercial web servers. Although SSDs offer clear benefits such as faster performance, the technology has a few limitations worth considering.

SSD Technologies

The marketplace currently offers a few different SSD memory chip technologies, each intended to fill specific needs. Among these is a low-cost NAND flash design called Multi-Level Cell (MLC) that gives the most bytes for the dollar and performs well enough for consumer use. At the high end is Single-Level Cell (SLC), which costs more but is faster and has a longer device life.

Advantages

SSDs have no moving parts and can store and retrieve data faster than a traditional hard disk drive. In an HDD, a mechanism scans the data recorded on the surface of a spinning metal platter. Due to physical inertia, it takes a few thousandths of a second to locate and fetch information. Although this seems quick, solid-state memory, not having that physical inertia to deal with, can perform much faster. In general, an SSD will outperform an HDD by up to a factor of 1,000, with random reads/writes racking up the biggest improvement, and sequential writes showing the least. In addition to faster speed, the lack of a motor-driven mechanism means the SSD is completely silent. SSDs are also more rugged than their mechanical cousins, standing up better to everyday bumps and jolts. Most SSDs also consume less power and physical space than HDDs, so they are a growing and popular choice in storage for laptops (not to mention tablets).

Wear and Tear

Although SSDs have no mechanical moving parts, each memory cell degrades electrically when writing new data over old. This means the drive can read data indefinitely but writing takes its toll on the memory chips. Depending on the specific SSD technology, any given memory bit can be rewritten from 5,000 to 100,000 times. When a bit is degraded, it can no longer reliably hold data. At this point, the drive’s controller circuit automatically moves the data from it and neighboring memory cells to a “fresher” area and marks the worn area as “out of service.” The controller skips the marked area for all future use.

Wear Leveling

Because of the memory wear issue noted above, most modern SSDs now come with wear-leveling technologies that keep track of where data is written. This technology avoids repeatedly over-writing the same physical bits and spreads the wear throughout the drive. Wear-leveling prolongs drive life and postpones the time at which the drive takes blocks out of service.

Over-Provisioning

SSD manufacturers build extra capacity into each drive, amounting to roughly 7 to 30 percent of the rated capacity. This practice, called over-provisioning, ensures that the drive maintains its rated capacity for the reasonable operating lifetime of the drive, despite losses from worn bits. The extra room also helps maintain drive performance as the drive fills with data. In addition to factory over-provisioning, you can manually adjust the drive’s overhead space with utility software.

Other Limitations

As of 2016, SSDs come at a premium price, offering less capacity for each dollar spent compared to HDDs. The storage capacity of a given SSD also tends to be somewhat less than that of an HDD, so if you manage many terabytes of data, you'd need more SSDs than you would the cheaper, larger HDDs.

Formats and Compatibility

Many consumer-grade SSDs are available in the 2.5-inch standard drive format used in laptop PCs. Other formats include mSATA and the newer M.2; these slim, card-style designs use a fraction of the space of traditional drives, attaching directly to the motherboard through PCIe or a dedicated socket. Manufacturers also make adapter brackets that let you fit these newer form factors into the older 3.5-inch HDD bay, as well as adapters for motherboards that lack the newer interfaces.

Consumer vs. Enterprise Grade

Solid-state drives come in consumer-grade and enterprise-grade units. Consumer-grade drives tend to be less expensive but are still well-suited to everyday personal tasks. Enterprise-grade drives tend to be faster and more expensive, with enhanced, brownout-resistant power supplies and memory chips that hold up better under continuous, write-heavy workloads.

SSD Economic Trends

Since the 1960s, solid-state memory chips have increased over a millionfold in capacity. This dramatic trend will likely continue as semiconductor makers push the limits of their science. The price of SSDs has steadily fallen over the past several years, and capacities have risen. It seems all but inevitable that SSDs will overtake HDDs at some point, rendering them obsolete.

Conclusion

Solid-state drives represent serious competition to traditional HDDs. Although currently more expensive and not without their own technical issues, they are clearly faster than mechanical hard drives, and as the technology advances, the benefits will only improve with time.

 

Atlantic.Net

Since 2010, we have been offering industry-leading cloud hosting and have upgraded our solutions to include fast SSD cloud servers at our six worldwide data centers.


How to Install and Configure Fail2ban on CentOS

February 1, 2016, by Jason Mazzota, under Cloud Hosting
Verified and Tested 4/28/16

Introduction

Fail2ban is a wonderful service primarily used to stop brute-force attackers from accessing your system. It's simple to install and configure and works great at deterring basic attackers.

This article is specifically for installation on CentOS. To install and use Fail2ban on Ubuntu and Debian, check out our how-to on that here.

Installing and configuring Fail2Ban on CentOS

We will be performing the steps below as the root user; prepend sudo to the commands if you are using another user. For all editing of configuration files, we will be using vi, though you can use whichever editor you are comfortable with. This installation is performed on a clean CentOS 6.5 64-bit cloud server.

Read More


How to Configure LVM (Logical Volume Management) on DRBD (Distributed Replicated Block Device)

January 26, 2016, by Paul Cortes, under Cloud Hosting
Verified and Tested 1/20/16

Introduction

This how-to will walk you through adding LVM to DRBD. Distributed Replicated Block Device (DRBD) is block-level replication between two or more nodes, used as a replacement for shared storage by creating a networked mirror. DRBD is used in environments that require systems or data to be highly available.

Prerequisites

  • Two servers running the Debian GNU/Linux distribution. Other versions of Linux will work as well, but the installation packages may be different. If you need a server, you can spin up a fast and reliable cloud hosting server from Atlantic.Net in under 30 seconds.
  • Both servers should be directly cross-connected, or have a separate network interface for private communication.
  • Both servers should have the same partitioning. This walkthrough assumes that both systems have a single /dev/sdb device that is going to be used as the DRBD volume.

Read More

