
Going All In On Serverless Computing with Google Cloud

Last summer, Google Cloud Functions headlined at Next ‘18 following its much-anticipated release to general availability. As companies rush to refactor workflows into serverless functions that spin up only when called, tools such as Istio and Knative have evolved to make serverless more attractive to the enterprise. In turn, many companies are poised to give serverless workloads a shot.

The move to serverless, however, might not be as easy as dipping a toe in the water by refactoring one service. Going all in on serverless means making adjustments to dependent services, budgeting, and developer roles. For example: as your serverless workload scales with demand, what happens to downstream workflows or legacy services that cannot scale up and accept requests at the same rate as your serverless code? Upgrading one service to serverless while neglecting its dependencies looks a bit like installing one brand-new tire while leaving three road-worn ones on.
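One common way to protect a slower downstream service from an auto-scaling caller is to gate calls behind a concurrency limit and shed (or queue) the overflow. Here is a minimal sketch of that pattern; the function names, the limit of 5, and the stand-in response are illustrative, not part of any Google API:

```python
import threading

# Hypothetical sketch: cap concurrent calls from an auto-scaling function
# to a legacy service that can only handle a few requests at once.
LEGACY_CONCURRENCY_LIMIT = 5
_gate = threading.BoundedSemaphore(LEGACY_CONCURRENCY_LIMIT)

def call_legacy_service(payload, timeout=2.0):
    """Forward a request to the legacy backend, or fail fast when it is saturated."""
    if not _gate.acquire(timeout=timeout):
        # Better to shed load (or enqueue it for retry) than to overwhelm the backend.
        raise RuntimeError("legacy service saturated; request shed")
    try:
        return {"status": "ok", "echo": payload}  # stand-in for the real network call
    finally:
        _gate.release()
```

In practice the same idea is often implemented with a message queue such as Cloud Pub/Sub between the serverless tier and the legacy service, so overflow is buffered instead of dropped.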

Likewise, as Cloud Functions scale up on demand (with no need to actively manage resources), what happens if your service sees sustained, abnormal spikes in traffic? In a traditional environment, network engineers would monitor traffic and manually scale up resources for the new normal. In a serverless workload, auto-scaling makes it easy to imagine receiving a surprise bill after a month of not checking activity. In this way, serverless also requires a new approach to budgeting: set notifications that fire when you hit 50% and 75% of budgeted spend, and come up with a plan for routing high-traffic tiers to dedicated resources. You especially don’t want your system to scale up infinitely because someone is abusing your service!
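The threshold logic behind those notifications is simple enough to sketch. This toy helper (the function name and threshold list are our own, though 50%/100% thresholds mirror Google Cloud Billing's default budget alerts) reports which fractions of a budget current spend has crossed:

```python
# Hypothetical sketch: determine which budget-alert thresholds a spend figure
# has crossed, mirroring the 50% / 75% notifications described above.
ALERT_THRESHOLDS = (0.50, 0.75, 1.00)

def crossed_thresholds(spend, budget, thresholds=ALERT_THRESHOLDS):
    """Return the fraction-of-budget thresholds that current spend has reached."""
    if budget <= 0:
        raise ValueError("budget must be positive")
    return [t for t in thresholds if spend >= t * budget]

# e.g. crossed_thresholds(80, 100) -> [0.5, 0.75]
```

In a real deployment you would not hand-roll this: Cloud Billing budgets let you configure these percentage thresholds directly and deliver alerts by email or Pub/Sub.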


Google Cloud Functions for Fine-Grained Serverless Control Compared to Google App Engine

App Engine is already serverless and has been for a while, but it scales your whole app, sized to its busiest component. Google Cloud Functions allows much finer granularity: code that spins up only for the milliseconds you need it. These functions respond to events in the cloud that your app emits, so it’s easy to imagine these triggering events linking together your entire cloud environment. And because Cloud Functions are automatically authenticated to the rest of your Google Cloud account, they can also be your link between Firebase, Google Cloud, and even Google’s ML APIs.
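To make the event-driven model concrete, here is a minimal background Cloud Function in the Python runtime, triggered when a file lands in a Cloud Storage bucket. The function name and bucket are illustrative; the `(event, context)` signature is the standard one for background functions:

```python
# Minimal sketch of a background Cloud Function (Python runtime) triggered by a
# Cloud Storage event. Function name and bucket name are illustrative.
def on_file_uploaded(event, context):
    """Runs when an object is finalized in a Cloud Storage bucket."""
    bucket = event.get("bucket", "<unknown>")
    name = event.get("name", "<unknown>")
    message = f"Processing gs://{bucket}/{name}"
    print(message)  # stdout is captured in Cloud Logging
    return message
```

A deploy command along the lines of `gcloud functions deploy on_file_uploaded --runtime python311 --trigger-bucket my-bucket` wires the trigger up; no server is provisioned until the event fires.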

There’s another distinction, which might already be familiar to those using Google Firestore: with a serverless database, you pay only for the operations performed against it. It’s a different notion, as you’re not paying for the database simply sitting there, the way you pay for data at rest in traditional Cloud Storage.
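A back-of-the-envelope sketch shows how this pay-per-operation model behaves. The unit prices below are placeholders for illustration only, not current Google pricing; check the official Firestore pricing page before budgeting:

```python
# Illustrative pay-per-operation cost model. These per-100k-operation prices
# are placeholder assumptions, NOT official Firestore pricing.
PRICE_PER_100K = {"reads": 0.06, "writes": 0.18, "deletes": 0.02}  # assumed USD

def estimate_monthly_cost(reads, writes, deletes):
    """Estimate monthly spend from operation counts alone: idle months cost $0 here."""
    ops = {"reads": reads, "writes": writes, "deletes": deletes}
    return sum(PRICE_PER_100K[kind] * (count / 100_000) for kind, count in ops.items())
```

The point of the model is the shape, not the numbers: a database that handles zero operations this month contributes zero to this estimate, which is exactly the contrast with paying for provisioned capacity or data at rest.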


Functions Provide Another Level of Abstraction Atop Kubernetes, and Knative Simplifies the Build Process

Serverless functions provide another level of abstraction on top of Kubernetes, one where developers can focus on application features instead of build support. In this world, your developers are not thinking about backend infrastructure, they’re not thinking about deployment, and accounting is the one thinking about budget. They are free to be programmers.

As Functions-as-a-Service offerings like Google Cloud Functions have taken off, Google and its partners recognized a need to standardize the fragmented approaches to building, eventing, and serving in a serverless environment. That’s where Knative began. By creating Knative and making it open source, Google makes the prospect of running serverless functions more attainable in a large-scale enterprise. Knative builds on top of Kubernetes primitives such as pods and deployments, and it uses Istio for service discovery and network routing.

Knative focuses on three areas: Build, which makes it easier for developers to build containers from source code; Events, which pave the way for functions to publish and subscribe to streams like Google Cloud Pub/Sub (and act as the glue between your cloud workloads); and Serving, which facilitates smooth scaling from zero to infinity and back down to zero at will.
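The Serving piece is easiest to see in a manifest. This is an illustrative Knative Service definition (service and image names are placeholders) using the standard autoscaling annotations to allow scale-to-zero while capping runaway scale-up:

```yaml
# Illustrative Knative Service: Serving scales this container up with traffic
# and back down to zero when idle. Names and image are placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-fn                                 # placeholder name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"   # allow scale to zero
        autoscaling.knative.dev/max-scale: "10"  # cap runaway scale-up
    spec:
      containers:
        - image: gcr.io/my-project/hello:latest  # placeholder image
```

Note how the `max-scale` cap complements the budgeting advice above: autoscaling limits and billing alerts are two halves of the same cost-control story.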

By combining Kubernetes, Istio, and Knative, you can aim for the highest technically viable level of abstraction, which makes an application’s code much clearer while keeping it performant and robust.


The Long Term Business Case for Serverless Using Google Cloud Functions and Knative

While refactoring your architecture to include serverless workloads can yield cost savings, a smoother development process, and a clearer deployment pipeline, it’s not always easy to do. As TechCrunch puts it, the business case for serverless boils down to achieving the highest long-term development velocity. That velocity lets an agile business outpace competitors and react to market changes more quickly.

How? Because a lean serverless environment removes the undifferentiated heavy lifting that bogs down developers today and allows them to work on code that builds your business’s competitive advantage.

Learn more about going serverless

Can you envision where you can tighten up your code by refactoring with Google Cloud Functions, but are worried about downstream effects on the rest of your infrastructure? Contact SADA Systems today to discuss how serverless workloads can save costs and increase development efficiency.