October 8, 2020

How to Run Your APIs at Scale Using Kubernetes


In an earlier post, Taming the Kubernetes Revolution, we looked at how containers revolutionized the computing landscape, how they work, and some of the basic architectural components that make up the Kubernetes stack.

In this post we will examine Kubernetes in the enterprise, Kubernetes-based platforms, and how to run your enterprise APIs at scale using these technologies.


The Case for an Enterprise Kubernetes Platform

To realize all the benefits of containers, organizations need an orchestration and management platform to reduce complexity and streamline the development and operations processes.

Enterprises across all verticals are embracing container technologies at breakneck speed to transform their development cycles, re-architect legacy applications, and streamline their deployment processes to minimize risk. In the 2020 IDG Cloud Computing survey, 64% of IT decision-makers indicated that they are either researching, piloting, or already using these technologies today.

One of the main attractions of containers is their self-contained nature. They package everything needed to run an application, including the application binaries, configuration files, and dependent libraries, into a neat, isolated “box” that does not interfere with other applications running on the same host or cluster. This level of isolation is invaluable: it allows developers to code, test, and deploy far faster than before, making the business more agile and better equipped to deal with change.

Containers also make it easy to “lift-and-shift" complete application workloads from on-premises to the cloud, or vice versa. They have also proven to be a catalyst for enterprises adopting a multi-cloud strategy.


Some of the key drivers for adopting a multi-cloud strategy include:

  • Best-of-breed platform and service options
  • Cost savings/optimization
  • Improved disaster recovery/business continuity
  • Increased platform and service flexibility
  • Avoiding vendor lock-in

Like most things in life, adopting a multi-cloud strategy also comes with its fair share of challenges, including:

  • Increased complexity
  • Increased training and hiring costs
  • Increased costs due to cloud management
  • Security challenges

There is no doubt that container technology coupled with a multi-cloud strategy is the future (and present) of enterprise computing. That said, setting up and managing container environments takes real effort. All the aspects associated with running applications in production must be considered, such as compute and storage needs, security, logging, and monitoring.

Enter the Kubernetes (K8s) revolution. Kubernetes is a container-orchestration system that automates application deployment, scaling, and management.

One way to view Kubernetes is as a distributed data center or a massive clustered operating system. A cluster in this context is nothing more than a collection of computers running in a data center, in the cloud, in virtual machines, or any combination of these. All “nodes” of the cluster work together and appear as a single entity to the end user. Kubernetes manages all the resources of the cluster, so as a developer you do not have to worry about CPU, memory, storage, network connectivity, and so on; Kubernetes takes care of that for you.
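To make that single-entity view concrete, here is a minimal sketch using the official Kubernetes Python client (an assumption on our part; the cluster details are whatever your kubeconfig points at). One API endpoint fronts the whole cluster, and the individual nodes behind it are an implementation detail:

    from kubernetes import client, config

    # Assumes a kubeconfig is already set up (e.g. the one kubectl uses).
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # The entire cluster is addressed through a single API server;
    # Kubernetes decides which node actually runs each workload.
    for node in v1.list_node().items:
        print(node.metadata.name, node.status.node_info.kubelet_version)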


The same IDG survey found that 54% of organizations are turning to Kubernetes to help them achieve the benefits of containers.


The Platform Matters

Since the technology is open source and freely available, you could decide to build your own Kubernetes-based platform. However, most companies are turning to enterprise-ready Kubernetes platforms, which enhance the base functionality of K8s and add features to streamline operations, scaling, integration, security, and more.

There is a multitude of enterprise-ready K8s platforms on the market today, and each offers unique features over the others. It is important to find a platform that eases the management of complex environments so that developers can focus on building applications without concerning themselves with the underlying infrastructure. That, after all, is the whole point of using containers.

Some of the popular options available today include:

  • Red Hat OpenShift
  • VMware Tanzu
  • SUSE Rancher
  • Managed cloud offerings such as Amazon EKS, Microsoft AKS, and Google GKE


Running Your APIs at Scale

In modern architectures, a vast number of services are exposed over the network in the form of APIs. These APIs are delivered by distributed systems running on multiple servers and configurations in various locations, all coordinating their actions via network communication. We no longer provision VMs for these tasks, because scaling VMs horizontally is too difficult and time-consuming. Containers let us scale horizontally with ease when needed, allowing organizations to be more agile.
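With containers, scaling out a service becomes a one-line declaration of intent rather than a provisioning exercise. A minimal sketch with the Kubernetes Python client, assuming a hypothetical "orders-api" Deployment in the "default" namespace:

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    # Scale the hypothetical "orders-api" Deployment to 5 replicas;
    # Kubernetes schedules the new pods across whichever nodes
    # have capacity.
    apps.patch_namespaced_deployment_scale(
        name="orders-api",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )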

With APIs forming the backbone of most modern applications, they simply cannot afford to be unavailable. APIs power the applications we use daily, like Facebook, Twitter/X, Instagram, Uber, and Airbnb. These applications are used by billions of people around the world, 24 hours a day, and need to be always available. The only way to achieve this is with a resilient distributed system that runs in multiple geographic locations and can scale as demand increases or decreases.

Kubernetes supplies the relevant services to achieve all of this for your containerized application.


Scaling With the Horizontal Pod Autoscaler

There are many ways to scale your workloads, depending on your use case. Out of the box, Kubernetes supplies the Horizontal Pod Autoscaler (HPA), which broadly makes use of three categories of metrics (a minimal example follows the list):

  • Resource Metrics — These are metrics such as CPU and memory usage.
  • Pod Metrics — These are metrics specific to the pod, such as network usage and traffic (like packets per second).
  • Object Metrics — These are metrics of particular types of objects. For example, if you use an Ingress, you can scale your containers on requests per second.
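As a concrete example of the resource-metrics case, the sketch below creates an HPA that keeps average CPU utilization around 70% for the hypothetical "orders-api" Deployment, scaling between 2 and 10 replicas (the names and thresholds are illustrative, not prescriptive):

    from kubernetes import client, config

    config.load_kube_config()
    autoscaling = client.AutoscalingV1Api()

    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="orders-api-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="orders-api"),
            min_replicas=2,
            max_replicas=10,
            # Add or remove pods to keep average CPU usage near 70%.
            target_cpu_utilization_percentage=70,
        ),
    )
    autoscaling.create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa)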

Although the Horizontal Pod Autoscaler is great, out of the box it does not track the metrics that most directly affect the customer experience. Customers do not care about CPU utilization or memory usage. Customers using the application, which in turn depends on the underlying API, care about response times and availability. Fortunately, Kubernetes provides a way to use external metrics to scale your containers.
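Here is a minimal sketch of that idea, assuming an external metrics provider such as the Prometheus adapter is installed and exposes a hypothetical http_requests_per_second metric (the autoscaling/v2 API also requires a reasonably recent cluster and client):

    from kubernetes import client, config

    config.load_kube_config()
    autoscaling_v2 = client.AutoscalingV2Api()

    hpa = client.V2HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="orders-api-rps-hpa"),
        spec=client.V2HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V2CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="orders-api"),
            min_replicas=2,
            max_replicas=20,
            metrics=[client.V2MetricSpec(
                type="External",
                external=client.V2ExternalMetricSource(
                    # Hypothetical metric served by an external metrics
                    # provider (e.g. the Prometheus adapter).
                    metric=client.V2MetricIdentifier(
                        name="http_requests_per_second"),
                    # Target roughly 100 requests/second per pod on average.
                    target=client.V2MetricTarget(
                        type="AverageValue", average_value="100"),
                ),
            )],
        ),
    )
    autoscaling_v2.create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa)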

Scaling your containers based on customer experience metrics would surely help your business go the extra mile and delight your customers!

Get Started With Akana

See how easy it is to create and scale your API portfolio with Akana - sign up to find out if you qualify for a free 6-month trial.

Try Free

 
