As we continue to rely more on technology, keeping our information safe becomes increasingly difficult. With Wi-Fi the standard form of network communication for most business professionals on the go, the need for secure data transmission has never been greater. Wi-Fi networks, whether at a coffee shop, the airport, or even your home or office, are not inherently safe for sending and receiving data. According to idtheftcenter.org[i], in 2015 alone over 177 million personal records were exposed in reported data breaches.
The two most popular ways of accessing your data over Wi-Fi are sniffing and rogue access points[ii]. Sniffing is when another user nearby captures the data your computer transmits over Wi-Fi and then reassembles it to look for passwords or other unencrypted account information. The aptly named rogue access point is a Wi-Fi hotspot someone creates to look legitimate, like “Free Starbucks Wi-Fi” or “Airport Public Wi-Fi,” and then waits for users to connect. Once a user attaches to the hacker’s hotspot, all of that user’s traffic is captured on the hacker’s machine. The hacker can then use specialized programs to reassemble the packet capture, revealing what each user was looking at and whether any sensitive information or passwords were transmitted. One of the most effective solutions is to encrypt the traffic between your infrastructure and your home computer or laptop, which is why VPNs were developed.
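To see why unencrypted traffic is so dangerous, consider a toy sketch of what a sniffer works with (the payload and field names below are invented for illustration; real capture tools operate on raw packets, not a byte string):

```python
# Toy illustration of sniffing: a plaintext HTTP login request captured
# "on the wire" can be searched for credentials with trivial code.
# (Hypothetical payload; not a real capture.)
captured = (
    b"POST /login HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Type: application/x-www-form-urlencoded\r\n"
    b"\r\n"
    b"user=alice&password=hunter2"
)

def find_credentials(payload: bytes):
    """Scan a captured payload for an obvious password field."""
    marker = b"password="
    idx = payload.find(marker)
    if idx == -1:
        return None
    return payload[idx + len(marker):].split(b"&")[0].decode()

print(find_credentials(captured))  # -> hunter2
```

With a VPN (or HTTPS) in place, the sniffer captures only ciphertext, and a search like this turns up nothing.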
Utilizing RAID 10 in a server increases disk performance while simultaneously providing redundancy, allowing the system to survive drive failures.
RAID is an acronym for Redundant Array of Independent Disks or Redundant Array of Inexpensive Disks, depending on whom you ask. The term “independent” is arguably more appropriate, as RAID arrays are sometimes built with extremely expensive disks.
In layman’s terms, RAID is a method of configuring two or more hard drives to work as a single unit, with differing levels of redundancy providing better fault tolerance. “A fault-tolerant design enables a system to continue its intended operation, possibly at a reduced level, rather than failing completely, when some part of the system fails.”[i]
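The trade-off RAID 10 makes is easy to work out on paper: data is striped (RAID 0) across mirrored pairs (RAID 1), so usable capacity is half the raw total, and the array survives any single drive failure, and at best one failure per mirrored pair. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope RAID 10 arithmetic (illustrative only).
# RAID 10 stripes data across mirrored pairs, so half the raw
# capacity goes to redundancy.

def raid10_usable_tb(num_disks: int, disk_tb: float) -> float:
    if num_disks < 4 or num_disks % 2:
        raise ValueError("RAID 10 needs an even number of disks, minimum 4")
    return num_disks * disk_tb / 2

def raid10_max_survivable_failures(num_disks: int) -> int:
    # Best case: one failed disk in each mirrored pair.
    return num_disks // 2

print(raid10_usable_tb(4, 2.0))           # 4 x 2 TB -> 4.0 TB usable
print(raid10_max_survivable_failures(6))  # 6 disks -> up to 3 failures
```

Note the worst case is less forgiving: if both disks of the *same* mirrored pair fail, the array is lost.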
One of the fastest-growing and most damaging cyber security threats falls under a category called “ransomware.” Ransomware is malicious code, usually downloaded unknowingly, that encrypts all of a user’s files. This type of malware gets its name from what it does when a user tries to open an infected file: it prompts the user to pay a ‘ransom’ within a set timeframe to receive a decryption key, which would then allow the files to be decrypted. Even if you choose to pay the ransom, there is no guarantee you will regain access to your data. In this article, we will explain steps you can take to protect and secure your environment.
Ransomware is a real threat to any business that allows user access, as it depends on users to spread it. Different industries also face different risks: healthcare organizations usually opt to pay the ransom to protect patient data, while the education industry has the highest rate of infection. Other lucrative targets include classified documents, financial documents, and intellectual property. Whether named Telecrypt, iRansom, FSociety, or CryptoLuck, the goal for ransomware creators is the same: making money. According to Lavasoft, the CryptoWall 3 ransomware cost users $325 million in 2015 alone. As ransomware grows and evolves, it becomes even more costly. At the end of 2016, one of the most harmful ransomware strains was “Cerber.” Not only does it lock your files from being accessed, but recent variations have incorporated stealing personal information and running scripts that cause your machine to attack other servers.
Responsible businesses with sensitive data know they need a firewall to control traffic and secure their networks. What seems less well known, however, is the role that complementary technologies play in a comprehensive approach to cybersecurity. An Intrusion Detection System (IDS) enables organizations to take a proactive security stance, which is why Atlantic.Net offers one for its security-conscious customers.
Amid all the headline-grabbing data breaches of the past year, the vulnerability of companies in industries like health care may be overlooked. In the years since HIPAA became law, data breaches have come to cost healthcare firms over $5.5 billion annually, according to the Ponemon Institute.
Once online criminals have found a profitable target, they tend to return to it with ever more sophisticated attacks. One recent report indicated that over 75 percent of the healthcare industry had been infected with malware in the past year, and noted that a shockingly large share of ransomware attacks target medical treatment centers.
Clichés like the typical hacker being a teenager in his or her parents’ basement are persistent, and harmful, because they misrepresent the situation to potential victims of hacking. The numbers clearly show that hacking is now predominantly committed by sophisticated criminal organizations. Utilizing an IDS is a proactive approach to meeting that threat.
An Intrusion Detection System, or IDS, is a software application that monitors the network and hosting environment and analyzes activity on it. Any activity considered unusual is assigned a risk ranking based on information from global threat databases.
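To give a flavor of the rule-based idea behind an IDS, here is a deliberately minimal sketch: flag any source IP with repeated failed logins. The log format is invented for illustration; production systems match traffic against large, frequently updated rule sets rather than one hard-coded pattern.

```python
# Minimal sketch of rule-based intrusion detection (hypothetical log
# format). Real IDS software applies thousands of rules informed by
# global threat databases; this shows just one: brute-force detection.
from collections import Counter

def flag_brute_force(log_lines, threshold=3):
    """Flag source IPs with `threshold` or more failed logins."""
    failures = Counter()
    for line in log_lines:
        if "FAILED LOGIN" in line:
            ip = line.rsplit("from ", 1)[-1].strip()
            failures[ip] += 1
    return {ip for ip, n in failures.items() if n >= threshold}

logs = [
    "2017-01-10 09:00:01 FAILED LOGIN user=root from 203.0.113.9",
    "2017-01-10 09:00:02 FAILED LOGIN user=root from 203.0.113.9",
    "2017-01-10 09:00:03 FAILED LOGIN user=admin from 203.0.113.9",
    "2017-01-10 09:00:07 LOGIN OK user=kris from 198.51.100.4",
]
print(flag_brute_force(logs))  # {'203.0.113.9'}
```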
We are excited to announce the release of a new feature called Two-Step Login (also known as Two-Factor Authentication, 2FA, or TFA). This new feature provides you an extra layer of security when accessing your Cloud account via the Atlantic.Net Cloud Portal.
When you enable Two-Step Login, you’ll be required to provide a username and password as you normally do, plus a randomly generated verification code.
You’ll be able to get the verification code by text message or by using a simple authenticator app for a smartphone.
Most services only have one layer of security to protect user accounts: a password. With Two-Step Login, even if a bad guy hacks your password, he’ll still need your phone to get into your account.
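Verification codes of this kind are typically generated with the standard TOTP algorithm (RFC 6238), which builds on HOTP (RFC 4226): the server and your authenticator app share a secret, and both derive the same short-lived code from it. A sketch using only the Python standard library (we don’t know Atlantic.Net’s exact implementation; this is the common scheme):

```python
# How authenticator apps typically generate verification codes:
# TOTP (RFC 6238), built on HOTP (RFC 4226). The secret is shared
# once (e.g., via QR code) when the user enables Two-Step Login.
import hashlib, hmac, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    # The counter is simply the current 30-second time window.
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0.
print(hotp(b"12345678901234567890", 0))  # 755224
```

Because the code changes every 30 seconds and is derived from a secret that never leaves your phone and the server, a stolen password alone is useless.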
By: Kris Fieler
As businesses depend more on big data, the need to prevent data loss has never been more important. One of the most vital areas for this loss prevention is where data is temporarily stored, RAM. ECC, or Error-Correcting Code, protects your system from potential crashes and inadvertent changes in data by automatically correcting data errors. This is achieved with the addition of a ninth computer chip on the RAM board, which acts as an error check and correction for the other eight chips. While marginally more expensive than non-ECC RAM, the added protection it provides is critical as applications become more dependent on large amounts of data.
On any server with financial information or critical personal information, especially medical, any data loss or transcription error is unacceptable. Memory errors can cause security vulnerabilities, crashes, transcription errors, lost transactions, and corrupted or lost data.
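The idea behind that extra chip can be illustrated with a toy Hamming(7,4) code, the classic single-error-correcting scheme that ECC memory builds on (real modules use a wider SECDED variant over 64-bit words; this miniature version is for illustration only):

```python
# Toy Hamming(7,4) code: 3 parity bits protect 4 data bits, the same
# principle ECC RAM applies at a larger scale. The parity checks
# pinpoint any single flipped bit so it can be corrected in place.

def encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4        # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4        # covers codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4        # covers codeword positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(code):
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 0 = clean; else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

word = encode(1, 0, 1, 1)
word[2] ^= 1                          # simulate a single-bit memory error
print(decode(word))                   # [1, 0, 1, 1] -- error corrected
```

Non-ECC RAM has no such parity information, so the same flipped bit would silently corrupt the data.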
Availability is one of the biggest concerns of information technology chiefs. The NIH ran a study comparing availability of cloud and dedicated machines. Cloud won.
Unfortunately for CIOs, many aspects of their role can be stressful. For a survey featured in CIO magazine in 2015, 276 CIOs and other top IT leaders discussed the elements that give them the most trouble; the top three were security, availability, and making the right hires.
Let’s look specifically at the issue of downtime; in other words, the need to optimize availability.
Like many other top IT executives in the public and private sectors, a CIO at the National Institutes of Health, Alastair Thomson, is guiding his agency’s staff toward the cloud.
Science is ballooning. According to two bibliometric researchers, Ruediger Mutz of the Swiss Federal Institute of Technology and Lutz Bornmann of Germany’s Max Planck Society, the amount of published science is growing at 8-9% per year. “That equates to a doubling of global scientific output roughly every nine years,” explains the British journal Nature. “Bornmann and Mutz find that global scientific output has probably kept up this dizzying rate of increase since the end of World War II.”
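The doubling figure follows directly from the growth rate: at a compound annual rate r, output doubles every ln 2 / ln(1 + r) years, which is easy to check:

```python
# Sanity check on the growth figures quoted above: at a compound
# annual growth rate r, output doubles every ln(2) / ln(1 + r) years.
import math

def doubling_time(annual_rate: float) -> float:
    return math.log(2) / math.log(1 + annual_rate)

print(round(doubling_time(0.08), 1))  # 9.0 years at 8% growth
print(round(doubling_time(0.09), 1))  # 8.0 years at 9% growth
```

An 8–9% annual increase thus lands squarely on Nature’s “roughly every nine years.”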
Publication is, of course, not the only way science is growing, as CIOs at science-oriented organizations are reminded daily by the scope of their projects. Research data used to be discussed in terms of megabytes, then gigabytes; today, it’s typical for a project to work at the level of terabytes or petabytes.
Vendors such as IGEL Technology, HP, and Dell are boosting their support for cloud desktops to be used with their thin clients. This trend shows how manufacturers are having to keep pace as cloud becomes increasingly prevalent.
In this increasingly cloud-based world, the makers of thin clients are modifying them so that their customers can seamlessly take advantage of cloud-hosted desktops and software.
The extent to which the business world has implemented desktops and software delivered through cloud hosting has expanded in recent years, with companies increasingly wanting third parties to take care of managing the infrastructure. Meanwhile, the thin client market has been struggling, especially as low-end PCs have grown closer to thin clients in price. To stay in the game and get the attention of desktop virtualization companies, thin client heavy-hitters, including Dell, IGEL, and HP, have taken steps to support cloud-hosted desktops.
© 2017 Atlantic.Net, All Rights Reserved.