If you’re a science fiction fan like me, the term “artificial intelligence” recalls entities like HAL 9000, Skynet, and the Cylons, creations with a bad habit of rising up against their makers and trying to kill us all.
While artificial intelligence (A.I.) has struggled to gain a foothold in other niches, it is finding its place in the world of cloud computing, a sort of revolution within the revolution that could rapidly change the face of businesses using cloud computing solutions over the next few years.
What Is A.I.?
First contemplated and theorized in the 1950s, A.I. generally refers to the ability of machines to perform intellectual tasks. It has since branched into several subfields: machine learning, which, beginning around 1980, enabled computers to learn from data and build models; deep learning, which uses neural networks; and cognitive computing, a product of the last decade or so, which allows machines to interact with us naturally.
Machine learning (ML) and deep learning (DL) are the branches of A.I. most commonly linked to cloud computing. They can be harnessed for unsupervised work in analytics, predictions, and data mining – three absolutely huge parts of thousands of applications used by businesses in the cloud every single day.
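As a sketch of the kind of unsupervised work mentioned above, the snippet below groups a handful of made-up data points with a bare-bones k-means clusterer in plain Python. The data, the cluster count, and the deterministic initialization are all invented for illustration; real cloud ML services operate at vastly larger scale.

```python
def kmeans(points, k, iterations=10):
    """Cluster 2-D points into k groups with no labels (unsupervised)."""
    # Simple deterministic initialization for this sketch: the first k points.
    centers = list(points[:k])
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centers[i][0]) ** 2
                                      + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centers[i] = (sum(p[0] for p in cluster) / len(cluster),
                              sum(p[1] for p in cluster) / len(cluster))
    return centers, clusters

# Two obvious groups; with no labels at all, k-means should still place
# one center in each group.
data = [(0.1, 0.2), (9.8, 9.9), (0.2, 0.1), (10.0, 10.1), (0.0, 0.0), (9.9, 10.0)]
centers, clusters = kmeans(data, k=2)
```

No one tells the algorithm what the groups are; it discovers them from the shape of the data, which is exactly the "unsupervised" quality that makes ML useful for analytics and data mining.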
Deep learning takes machine learning a step further. Instead of using a single algorithm to crunch data, it stacks many related ones into deep neural networks that can refine themselves without human oversight. Deep learning has been successful in image recognition, facial recognition, and even predicting diseases from a patient’s electronic health records.
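To make that "stack of algorithms" idea concrete, here is a minimal sketch of a deep network's forward pass: several simple layers, each feeding the next, which is where the "deep" comes from. The weights below are hand-picked for illustration; a real network would learn them from data.

```python
def relu(v):
    # A common activation function: negative values become zero.
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    """One fully connected layer: each output is a weighted sum plus a bias."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def deep_forward(x, layers):
    """Pass the input through every layer in turn; depth comes from stacking."""
    for weights, biases in layers:
        x = relu(dense(x, weights, biases))
    return x

# Three stacked layers with made-up (not learned) weights and biases.
layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1]),
    ([[2.0, 0.0], [0.0, 2.0]], [0.0, 0.0]),
    ([[1.0, 1.0]], [0.0]),
]
output = deep_forward([3.0, 1.0], layers)
```

Each layer on its own is trivial arithmetic; the power of deep learning comes from composing many of them and letting training adjust the weights.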
In three areas of cloud computing, A.I. is taking long strides. Those areas are ML algorithms, Big Data and parallel processing. Let’s drill down for a closer look at all three.
What Is Parallel Processing, and How Does It Work in the Cloud?
We took our kids to SeaWorld over the Thanksgiving break and stopped for a ride on a little Ferris wheel in the play part of the park. Because the crowds were thin, only one employee was working the ride, which meant he had to walk the kids to their basket, make sure their seatbelts were fastened, secure the basket door, turn the ride on, turn the ride off, unbuckle the kids, lead them to the exit and do the whole thing over and over again. Imagine how the inclusion of just one more employee could have sped up this ride.
What can be tough for many of us to understand is that while computers are amazingly proficient when it comes to performing computations, there are many things we ask them to do that take enormous amounts of time. Some of the problems we want them to solve for us can take microprocessors hours, days, even years to solve on their own. One way around this is to build more powerful processors using nanotechnology, but that’s an expensive way of going about it.
Enter parallel processing, which simply means more than one microprocessor handling parts of the same overall task, so that multiple processors shoulder the load. To have multiple processors working on the same problem at the same time, there are two big factors to manage: latency and bandwidth. If you’re unfamiliar with the term “latency,” it refers to the amount of time it takes for a processor to send results back to the system. The longer the wait, the longer it takes the entire system to work through the problem. Bandwidth is a more common term, referring to how much data a processor can send in a given length of time.
While an impressive technology, parallel processing had not seen much use outside of academic institutions until recently, with the dawn of high-performance parallel processing. But seeing the value of putting A.I. in charge of these massive problem solvers, Google, Intel, Qualcomm, NVIDIA and IBM have all rolled out neural processors designed to be managed by A.I.
How A.I. Uses Big Data
Big Data’s been the next big thing for a while now, but our ability to harness the meaning of the data lags behind our ability to collect it. A Deloitte study shows the amount of data comprising the ‘digital universe’ will hit 44 Zettabytes in 2020. A Zettabyte equals 1 billion Terabytes. The one constant of deep learning algorithms is that they need to digest enormous amounts of data: the more they consume, the better their decision-making becomes, allowing them to take on more and more responsibility inside a business.
On the flip side, the rising amount of data demands a corresponding technology to both manage companies’ data centers and perform Big Data analytics. The same Deloitte release suggests that by 2020, each self-driving car on the road will generate 4,000GB per day, a smart commercial jet will generate 40,000GB per day and the average Internet user will generate 1.5GB per person per day. And that doesn’t even account for data stored in edge computing devices – those that are located close to the Internet of Things (IoT) device they serve.
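Those per-day figures add up quickly. A few lines of arithmetic, using the report's numbers and a 365-day year, turn them into yearly volumes:

```python
GB_PER_TB = 1_000   # decimal units, as storage vendors count them
TB_PER_PB = 1_000

def yearly_petabytes(gb_per_day, days=365):
    """Convert a daily data rate in gigabytes into petabytes per year."""
    return gb_per_day * days / (GB_PER_TB * TB_PER_PB)

car_pb = yearly_petabytes(4_000)    # one self-driving car: ~1.46 PB per year
jet_pb = yearly_petabytes(40_000)   # one smart commercial jet: ~14.6 PB per year
user_gb = 1.5 * 365                 # one average Internet user: ~547.5 GB per year
```

So a single car produces well over a petabyte a year, and a single jet roughly ten times that, which is why data centers need automated help just to keep up.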
As business enterprises increasingly need a massive data-crunching champion, cloud computing companies have begun to deploy Artificial Intelligence as a service (AIaaS). Once AIaaS is deployed, it can crunch data faster than any single microprocessor or human mind could ever hope to match.
Imagine if a pizza delivery service could integrate data on current traffic conditions into its delivery process, allowing continuous updates on the best possible routes for its drivers. The amount of time and gasoline saved, not to mention the upswing of customer satisfaction, would see the investment pay off in days, not years.
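As a sketch of that idea, the snippet below runs Dijkstra's shortest-path algorithm over an invented road network whose edge weights are travel times in minutes, the kind of numbers a live traffic feed could keep updating between deliveries. The place names and times are made up for illustration.

```python
import heapq

def fastest_route(graph, start, goal):
    """Find the quickest path through a network of travel times (Dijkstra)."""
    queue = [(0, start, [start])]   # (minutes so far, current stop, route taken)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, minutes in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
    return float("inf"), []

# A made-up road network; a traffic feed would refresh these minute values.
roads = {
    "shop":    {"main_st": 5, "highway": 2},
    "main_st": {"customer": 4},
    "highway": {"main_st": 1, "customer": 9},
}
cost, path = fastest_route(roads, "shop", "customer")
```

Here the seemingly longer hop onto the highway actually wins, which is the whole point: with fresh traffic data, the best route is often not the obvious one.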
ML Algorithms for Cloud Applications
Led by Amazon, the big three among cloud computing companies raced to a big lead in the industry early on, offering software and infrastructure as a service to businesses. But that hardly makes the race to supply the world with the cloud over, as other companies are now embracing the idea of producing ML algorithms for specific cloud applications that will deliver AIaaS.
The first of these is the most well-known: cognitive computing, in which the algorithm combines pattern recognition, language processing and a whole lot of data mining to try to imitate human thought patterns.
The second use might get a roll of the eyes from those who have called a company’s 1-800 number and been disappointed when the automated assistant asks “I didn’t understand your request, would you like to return to the main menu?” These chatbots and virtual assistants are here to stay, and they are getting smarter every time they have a conversation. One estimate says that by 2020, 85% of all customer service conversations will be with virtual assistants.
A third is the rising power of the IoT, which would connect every potentially “smart” machine in the world to the cloud and add that massive amount of data to the conversation. The computational ability of ML will enable IoT devices to gather data and make decisions without actually sending said data to the cloud.
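A minimal sketch of that edge-side decision-making, assuming a hypothetical temperature sensor: the device decides locally which readings matter, uploads only those, and handles the rest itself without ever sending them to the cloud. The thresholds and readings are invented for illustration.

```python
def edge_filter(readings, low=10.0, high=30.0):
    """Decide on the device which sensor readings are worth uploading.

    In-range readings are handled (and dropped) locally; only out-of-range
    ones are queued for the cloud, saving bandwidth.
    """
    upload = [r for r in readings if r < low or r > high]
    handled_locally = len(readings) - len(upload)
    return upload, handled_locally

# A handful of made-up temperature readings from one IoT sensor.
readings = [21.5, 22.0, 35.2, 20.9, 8.4, 23.3]
upload, handled_locally = edge_filter(readings)
```

Out of six readings, only the two anomalies would travel to the cloud; multiplied across millions of devices, that local filtering is what keeps the IoT's data flood manageable.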
And perhaps the most exciting of the ML algorithms from an economic standpoint is a rapid expansion in business intelligence. As more and more data is fed into a company’s A.I. system, the more things can be automated, and A.I. will be better able to not only predict what is coming next but also start and complete smaller tasks in the background, thus rapidly increasing the speed of business.
Despite the worst fears of everyone from Isaac Asimov to the Wachowski Brothers, A.I. has not come to take over our world, but to improve the way we harness technology to make everything better. Consider the surface of A.I. finally scratched.