The last two decades have been an especially exciting time to watch technology. We slowly understood and embraced the Internet, we slowly wrapped our minds around the concept of a cloud, and now we get to watch the genesis of “Cloud 2.0”. So what is it? Where does Cloud 2.0’s distinction from the original cloud lie?
Cloud 2.0: A Focus on Data
It’s, forgive the phrase, a bit cloudy. Some have been quick to treat Cloud 2.0 as the rising acceptance of hybrid on-premises/cloud environments. However, an emerging camp argues that Cloud 2.0 is actually all about the data. To sharpen the distinction: Cloud 1.0 offers businesses the ability to do what they’re doing today on bare metal or on-premises virtualization, only in someone else’s data center.
As Diane Greene, Google’s senior vice president for cloud businesses, recently explained to Computerworld,
“It’s just a given now that you have a more cost-effective and reliable way of computing. The 2.0 of the cloud is the data and understanding the data. Now that you’re in the cloud, how do you take advantage of it so your business can operate at a whole new level.”
In the case of Google Cloud Platform (GCP), this means that organizations can run virtual machines and maintain storage in Google data centers, leaving machine maintenance, hypervisor maintenance, and power and cooling considerations to Google. In Cloud 2.0, the focus is not on simply running things somewhere else, but on changing how businesses compute. Instead of running a VM with Hadoop on it (requiring management of both the VM and the underlying Hadoop software), Google delivers BigQuery, which allows users to run queries on massive datasets without concern for the infrastructure or the software: only the data matters.
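As a small illustration of that shift, the query below runs against one of BigQuery’s public sample tables (`bigquery-public-data.samples.shakespeare`); there is no VM, cluster, or Hadoop installation to manage, just Standard SQL over the data:

```sql
-- Count the most frequent words across Shakespeare's works.
-- No cluster provisioning: BigQuery scans the public sample table directly.
SELECT
  word,
  SUM(word_count) AS total
FROM `bigquery-public-data.samples.shakespeare`
GROUP BY word
ORDER BY total DESC
LIMIT 10;
```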
According to SADA’s Director of Cloud Platform, Simon Margolis (recently quoted in CRN on Oracle’s patent suit against Google), another distinction Cloud 2.0 introduces involves Google Container Engine (GKE). In Cloud 1.0, one would need to spin up cloud VMs, manage those VMs, install Kubernetes and Docker to run and manage one’s containers, and dedicate personnel to managing the resources consumed by the cluster to ensure proper scaling. With GKE, Kubernetes is hosted by Google and automatically scales infrastructure up and down based on the customer’s logic. Furthermore, there is no need for manual orchestration, as GKE is able to provision and de-provision as needed.
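To sketch what “the customer’s logic” looks like in practice, the manifest below (the names `web` and `example-app` are hypothetical, not from the article) is essentially all a GKE user declares; Kubernetes, hosted by Google, handles scheduling, and the autoscaler adjusts replica counts against the stated CPU target:

```yaml
# Illustrative Deployment plus autoscaler for a GKE-hosted workload.
# Names and image are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: web
        image: gcr.io/example-project/web:1.0
        resources:
          requests:
            cpu: 100m
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```

The customer states intent (two to ten replicas, 70% CPU utilization); GKE provisions and de-provisions the underlying machines to satisfy it.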
In a nutshell, just like Cloud 1.0 allowed organizations to focus less on infrastructure—power, physical machines, disk storage, etc.—Cloud 2.0 allows organizations to focus less on virtualization, automation, and software, allowing them to instead focus more on their data and their application. Ultimately this is a further abstraction of the concept of a “data center as the computer” in which data centers are intelligent enough for customers to ignore much of the typical computing “housekeeping” and spend more time on their core competencies.
Cloud 2.0 is poised to be the cloud we know, better equipped than ever to help organizations make line-of-business decisions.
The Rise of Big Data
Analytics determined that a very large number of people would tune in to House of Cards on Netflix. How? Netflix analyzed its viewing data and found not only that David Fincher’s The Social Network had a huge following, but that most viewers watched it in its entirety. Netflix also found that those who enjoyed Fincher’s work frequently viewed films starring Kevin Spacey. Thus House of Cards, the understated yet taut combination of Fincher’s style and Spacey’s execution, became a predictably strong offering from Netflix that continues to drive engagement years later.
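The kind of cross-referencing described above can be caricatured in a few lines of Python (the data and logic here are purely illustrative, not Netflix’s actual method):

```python
# Toy illustration of audience cross-referencing: given per-viewer
# watch histories, measure how many fans of one title also watch another.
# All data here is made up.

def overlap(histories, title_a, title_b):
    """Fraction of title_a's viewers who also watched title_b."""
    fans_a = {v for v, titles in histories.items() if title_a in titles}
    if not fans_a:
        return 0.0
    both = {v for v in fans_a if title_b in histories[v]}
    return len(both) / len(fans_a)

histories = {
    "v1": {"The Social Network", "Se7en", "American Beauty"},
    "v2": {"The Social Network", "American Beauty"},
    "v3": {"The Social Network"},
    "v4": {"Se7en"},
}
# Two of the three Social Network viewers also watched American Beauty.
print(overlap(histories, "The Social Network", "American Beauty"))
```

At Netflix scale the same question runs over billions of viewing events, which is exactly the workload Cloud 2.0 services are built for.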
So can Cloud 2.0 offer insights to businesses, allowing them to make similarly informed decisions to guide their product offerings and corporate strategy?
Google thinks so. Greene further explained to Computerworld during Google’s recent I/O event:
“The revolution of the cloud is about the economics of scale. It’s really about data. All of a sudden everybody can share the data… We’ve turned a corner in how we think. Machine learning generates incredible value to a company. It’s the ability to get insights you weren’t getting before. The cloud is enabling people to create a lot more value.”
-Diane Greene, Senior Vice President, Google Enterprise Business
While several cloud computing giants—IBM, Amazon, and Google itself—are all working on analytics, big data, and machine learning mechanisms, Google has reason to have extra swagger in this arena. Other companies are exploring machine learning and data analytics tools, but Google—the behemoth known for developing complex, intelligent search and indexing algorithms for the entire Internet—has done far more to develop unique technologies for deep learning and analytics. Because Google operates at scales unlike any other organization, it has had to invent technologies like Bigtable to store massive amounts of data. It has written the book on managing big data and has continued to innovate on how best to derive insights from such data.
Google Cloud Machine Learning, one of the more exciting players in Cloud 2.0, is, according to HealthcareITNews, “integrated with other offerings including BigQuery for processing large data sets, Cloud Dataflow for creating pipelines, Cloud DataLab for so-called data exploration, Cloud Storage, and DataProc, a managed services comprising Hadoop, MapReduce, Spark, Pig.”
Numerous integrated tools are in place to simplify the process of developing predictive models and deploying them to analyze data sets existing in the cloud.
The Power of Cloud 2.0 vs. On-Premises
Companies have always stored lots of data, previously housed on-premises. The problem was that there wasn’t much they could actually do with it: it was simply too costly, time-consuming, and cumbersome to retrieve, parse, and analyze the data packed away in their servers. Machine learning is poised to be the missing link. It feeds off massive volumes of data and automates the process of identifying patterns, educating line-of-business representatives on ways to improve their batting averages.
Putting Predictive Analytics into Practice
Google Cloud Machine Learning is a managed platform designed to help teams build machine learning models. Data scientists can monitor data, develop TensorFlow models, train those models, and analyze the results. They can build models of any size on managed, scalable, GPU-powered infrastructure, and models built within the framework are immediately available for use with Google’s global prediction platform, which supports thousands of users and terabytes of data.
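At toy scale, the train-then-predict cycle the platform manages can be sketched in plain Python (a stand-in: a real workload would express the model in TensorFlow and train it on the managed infrastructure, not in a hand-rolled loop):

```python
# Toy stand-in for the train/predict cycle Cloud Machine Learning manages:
# fit a linear model y = w*x + b by gradient descent, then predict.

def train(data, lr=0.05, epochs=500):
    """Fit w, b to (x, y) pairs by minimizing mean squared error."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, x):
    return w * x + b

# Synthetic "dataset": points drawn from y = 3x + 1.
data = [(x, 3 * x + 1) for x in range(-5, 6)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # recovers values close to 3 and 1
```

The platform’s value proposition is that this cycle—training on large datasets, then serving predictions online or in batch—runs on Google’s infrastructure rather than on loops and machines the customer maintains.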
The platform is integrated with Google Cloud Dataflow for pre-processing and ETL, allowing IT teams to access data from Google Cloud Storage, Google BigQuery, and others. Once models are built, users can begin to predict patterns using online and batch prediction services.
The Future of the Google Cloud
Ever since Google secured Greene from VMware, she has been making the unimaginable seem practical, and she believes that the cloud can become the search titan’s main moneymaker. Considering Google’s incredible earnings from its ad sales, this is a lofty goal. And yet… not. Greene spoke with Wired and explained, “Once you get everything in the cloud, what it enables for a company is unbelievable. You can start applying machine learning and intelligence to everything you do. I don’t think we even know where this is going to go. It’s all about us taking our expertise and our capabilities and then going and understanding what the possibilities are for other companies, and this requires investment. We’re very serious about that.” Seeing the promise the cloud offers for businesses to make calculated, data-driven bets similar to Netflix’s content development approach, one can’t help but wonder—can Google really outdo itself with Cloud 2.0?