Juan-Claude de Villiers
To say that containers created hype in the software industry is the understatement of the century. Containers completely revolutionized the way we design, develop, package, distribute, run, and scale applications and APIs in the modern digital enterprise.
In this post, I will try to tame the revolution by unpacking what containers are all about, why they are important to us, and the ecosystem that supports the cause.
Applications and APIs form the cornerstone of the modern digital enterprise. If they break, business stops, revenue takes a hit, and, most importantly, customer satisfaction suffers.
These core components need to be hosted on infrastructure in the form of a server. Given how server architecture and operating systems fundamentally work, it was standard practice to dedicate a server to each mission-critical business application or API, simply because there was no effective way to isolate resources for multiple components on a single physical server.
As a result, the typical flow of events went something like this: every time the business needed a new application, IT (Information Technology) went out and bought a new server. With no one really knowing what the hardware specifications should be, IT played it safe and bought a high-end machine, not wanting to be blamed for the new application performing poorly on underpowered hardware.
This left many over-powered servers running at less than 10% of their capacity, a tragic waste of environmental resources, datacenter space, operational effort, and money.
During the late '90s and early 2000s there was a momentous shift in the way we obtain and use IT infrastructure. VMware was one of the first commercially successful companies to virtualize the x86 architecture, and at last we had a secure, isolated way of running multiple business applications on the same underlying hardware.
IT departments no longer had to buy dedicated servers for each new business application; they could now use spare capacity on existing hardware, extracting maximum value from the hardware investment.
Even though virtualization was a game changer, it had its fair share of challenges, too. The fact that each virtual machine had to have its own operating system, CPU, RAM, and disk space allocation meant that valuable system resources got wasted on the footprint of each virtual machine. Besides the wasted resources and slow startup times of VMs, each operating system required licensing, operational maintenance, and monitoring. And moving workloads between hypervisors, or across multiple cloud platforms, is more challenging than it should be in many cases.
A new revolution was desperately needed... Hello, containers!
Containers are not a new concept at all. For many years now, big tech behemoths like Google, Facebook, and Amazon have been using container technologies to circumvent some of the challenges the VM model poses.
In 2013, a new hero was born to lead the container revolution. Docker packaged the technology in a way that was simple and approachable, making it useful and ready for the masses. Although multiple container runtimes are available in the market today, it is fair to say that Docker is touted as the industry standard, with the backing of big hitters like Microsoft and Red Hat, to name a few.
Just as shipping containers allow goods to be transported by ship, rail, or road regardless of the cargo inside, software containers act as a standard unit of software deployment that can contain different code, libraries, and dependencies. Containerizing software enables developers and IT professionals to deploy it across environments and cloud platforms with little or no modification.
In the container model, common infrastructure components like CPU, RAM, and disk space are shared across multiple business applications running on the same host. This frees up the host machine's resources, potentially saves on OS licenses, and makes patching and monitoring operationally much easier.
Containers are also super-fast to start up and ultra-portable: moving a container and its workload between your laptop, a VM, a bare-metal server, or even multiple cloud providers is a breeze compared to the alternative.
Containers are front and center in the modern IT landscape and are intrinsically intertwined with concepts like cloud-native, multi-cloud, microservices, and service meshes.
As with most things in the world of technology and innovation, new challenges always arise, and containers are no different.
As container adoption grew, organizations quickly realized that containers were surfacing everywhere, and it soon became extremely difficult to manage, monitor, scale, and orchestrate functions across so many of them. The net result was the equivalent of digital anarchy... the dominoes started tumbling and chaos soon erupted.
Kubernetes (κυβερνήτης, Greek for "helmsman," "pilot," or "governor") was originally developed by Google and handed over to the community as an open source project in 2015. Today the project is governed by the Cloud Native Computing Foundation (CNCF), part of the Linux Foundation.
Classified as a container-orchestration system, Kubernetes is used for automating computer application deployment, scaling, and management.
The purpose of Kubernetes might not be immediately obvious to anyone who still thinks in terms of physical datacenters with rows of servers each hosting business applications that rely on the operating system as the foundation that everything gets built on.
One way to view Kubernetes is as a kind of distributed datacenter or massive clustered OS. A cluster in this context is nothing more than a collection of computers running in a datacenter, in the cloud, in virtual machines, or some combination of these. All "nodes" of the cluster work together and appear as a single entity to the end user. Kubernetes manages all the resources of the cluster, so as a developer you do not have to worry about CPU, memory, storage, network connectivity, and so on; Kubernetes takes care of that for you.
Many cloud providers offer managed Kubernetes as a platform or provide infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed. Many vendors also offer their own branded Kubernetes distributions.
Going through the official Kubernetes documentation is nothing short of overwhelming, to say the least, so let us highlight some of the key concepts and terminology.
A container image is binary data that encapsulates an application and all of its software dependencies (think of it as a blueprint). Container images are executable software bundles that can run standalone and that make very well-defined assumptions about their runtime environment.
Each container that you run is repeatable; the standardization from having dependencies included means that you get the same behavior wherever you run it. Containers decouple applications from underlying host infrastructure. This makes deployment easier in different cloud or OS environments. Containers represent a runtime instance of a container image.
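As an illustration, a container image is commonly described by a Dockerfile. The sketch below assumes a hypothetical Node.js API called server.js; every name in it is a placeholder, but the structure is typical:

```dockerfile
# Start from an official base image that provides the runtime
FROM node:18-alpine

# Copy the application and install its dependencies into the image
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .

# Document the port the app listens on and define the startup command
EXPOSE 8080
CMD ["node", "server.js"]
```

Because the dependencies are baked into the image at build time, the resulting container behaves the same on a laptop, a VM, or any cloud provider.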
Kubernetes runs your workload by placing containers into Pods to run on Nodes. A node may be a virtual or physical machine, depending on the cluster. Each node contains the services necessary to run Pods.
Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A Pod is a group of one or more containers, with shared storage/network resources, and a specification for how to run the containers.
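To make this concrete, here is a minimal Pod manifest. The application name, image reference, and port are hypothetical placeholders, not values from this article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-api               # hypothetical application name
  labels:
    app: my-api
spec:
  containers:
  - name: my-api
    image: registry.example.com/my-api:1.0   # hypothetical image reference
    ports:
    - containerPort: 8080    # port the containerized app listens on
```

Applying this with `kubectl apply -f pod.yaml` would create a single Pod; in practice you rarely create bare Pods, which is where Deployments come in.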
A Deployment is the modern way to handle high availability (HA) in Kubernetes, replacing the older Replication Controller. A Pod by itself is "mortal," but with a Deployment, Kubernetes can ensure that the number of Pods a user specifies is always up and running in the system. A Deployment specifies how many instances of a Pod should run and is defined in a YAML file.
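Such a YAML file might look like the following sketch, which asks Kubernetes to keep three replicas of a hypothetical my-api Pod running (all names and the image reference are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 3                # Kubernetes keeps three Pods running at all times
  selector:
    matchLabels:
      app: my-api            # manage Pods carrying this label
  template:                  # the Pod template each replica is created from
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: registry.example.com/my-api:1.0   # hypothetical image reference
        ports:
        - containerPort: 8080
```

If a Pod crashes or its node fails, the Deployment notices the shortfall and starts a replacement to get back to three replicas.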
A Service is an abstract way to expose an application running on a set of Pods as a network service. With Kubernetes you do not need to change your application to use an unfamiliar service discovery mechanism: Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
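A matching Service manifest might look like this sketch (again with hypothetical names); it load-balances traffic arriving on port 80 across all Pods labeled app: my-api:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-api               # also becomes the Service's DNS name in-cluster
spec:
  selector:
    app: my-api              # route traffic to Pods carrying this label
  ports:
  - port: 80                 # port the Service exposes
    targetPort: 8080         # port the container listens on
```

Other Pods in the cluster can then reach the application at a stable address, no matter how often the underlying Pods are replaced.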
The kubelet is the primary “node agent” that runs on each node.
kube-proxy, the Kubernetes network proxy, runs on each node and reflects Services as defined in the Kubernetes API.
The above shows a simplified architecture view of the most commonly used components, but this just scratches the surface. Now that we have a basic understanding of containers and Kubernetes, let us look at the main steps involved in getting your application to run on Kubernetes (K8s).
Working with Kubernetes consists of five main steps:
Wrapping your head around Kubernetes requires understanding many abstract concepts, lots of reading, and, most importantly, trying it out for yourself. The best way to dive in is to get your hands dirty and play around with this revolutionary technology stack that is taking the world by storm. I would highly recommend the official tutorials on the Kubernetes website.
Another useful diagram to study is the Kubernetes Resource Map below. It provides a good overview of the different components and how they relate to each other.
Most leading enterprises today are moving to the cloud, or planning to do so imminently, to gain efficiency, flexibility, and agility while reducing the complexity and cost of their IT and business infrastructure.
This digital pilgrimage to the cloud has accelerated the need for automation across the enterprise, for cloud-neutral solutions that are portable across multiple cloud providers, for scalable resources, and for security, to name a few.
Let us look at three of the main considerations IT Leaders need to evaluate when examining Kubernetes as an enterprise platform:
So, with all these benefits, why would any organization not adopt an enterprise Kubernetes platform? The reality is that there is a lot of complexity involved. Kubernetes is very capable, but it is also complex by nature.
Kubernetes and containers have changed how companies do business today. Software applications and APIs have become the cornerstone of many enterprises and a key enabler in digital transformation strategies. The reliance on IT has shifted from being vertical to horizontal across the enterprise. Departments are all building assets that contribute to the digital value chain and need a flexible, scalable, secure, and vendor-agnostic platform to be successful in their quest for market supremacy.
Whoever adapts and evolves faster, will have a competitive edge.
In the next post, we will be discussing “How to Run Your Enterprise APIs at Scale Using Kubernetes.”
See why Akana is a Strong Performer in The Forrester Wave™: API Management Solutions, Q3 2020 report.
API Management Consultant
Juan-Claude has over 15 years of professional experience in the IT industry, having held senior-level positions across sales, sales engineering, consulting, and software development. He currently works as a Consultant with Akana, specializing in APIs, API Security, API Management, Cloud Integration, and Analytics.