There is a learning curve associated with scheduling containers in production on Kubernetes. But if your cloud architect or platform engineering team can integrate and configure the set of tools needed for deploying and managing containerized workloads, then your developers, Application Ops, and Cluster Ops teams can move up the curve and accelerate time to value for your business.
But before we discuss how one can move up the learning curve, let’s first get a little context on why containers are a forcing function for change.
Historical Context
When virtual machines (VMs) were new, the technology learning curve primarily affected Ops teams, who had to learn to manage, snapshot, and migrate this new abstraction. The primary unit of management became the VM, not the physical server. Development practices didn’t have to change much to get the most value out of the technology.
But the transition to containerized workloads, as well as the deployment of containers in production on Kubernetes, has a big impact on both Dev and Ops. Containers change how applications are architected and written, and they also change how applications are managed, monitored, and supported in production.
So, now both Dev and Ops have a new technology learning curve.
Developer Learning Curve
Rewriting your application for containers is not as simple as taking an application running on a physical machine or a virtual machine and just packaging it in a container. Rather, developers need to do some things differently with containers. This includes:
◈ Packaging – defining the application image with build instructions stacked in layers, so each layer can be cached and rebuilt independently.
◈ Service discovery and catalog – since you’ll be running services across multiple containers, your application needs to find and bind to dependent services at runtime (see the sketch after this list).
◈ Key management – managing authentication and rotating keys across services.
◈ Logging and monitoring – obtaining data at the application, container, and node levels, as well as from dependent services.
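To make the service-discovery point concrete, here is a minimal sketch. In Kubernetes, a pod can locate a dependent service either through the environment variables the kubelet injects for services that already existed when the pod started, or through cluster DNS. The `redis` service name, namespace, and port below are hypothetical placeholders.

```python
import os
import socket


def discover_service(name: str, namespace: str = "default", default_port: int = 6379):
    """Find a dependent in-cluster service via env vars, falling back to DNS.

    Kubernetes injects <NAME>_SERVICE_HOST / <NAME>_SERVICE_PORT variables
    for services that existed when this pod started; cluster DNS resolves
    <name>.<namespace>.svc.cluster.local for everything else.
    """
    prefix = name.upper().replace("-", "_")
    host = os.environ.get(f"{prefix}_SERVICE_HOST")
    port = int(os.environ.get(f"{prefix}_SERVICE_PORT", default_port))
    if host is None:
        # Fall back to the cluster-DNS name for the service.
        host = f"{name}.{namespace}.svc.cluster.local"
    return host, port


# Hypothetical dependent service named "redis" in the "default" namespace.
host, port = discover_service("redis")
conn = socket.create_connection((host, port), timeout=5)
```

In practice a client library or service mesh often hides this lookup, but the binding still has to happen at runtime rather than being hard-coded at build time.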
Ops Learning Curve
Ops arguably has an even bigger learning curve. Virtual machines already introduced a lot of dynamism into the infrastructure environment, but containers raise that by an order of magnitude: a workload can spin up in milliseconds, be killed, and restart somewhere else. This learning curve involves:
◈ Compute capacity planning – planning CPU and memory at a new level of granularity, with multiple containers sharing a single machine (physical or virtual).
◈ Networking – managing networking within and between Kubernetes clusters, especially since more containers mean more east-west network traffic. These clusters may span both private infrastructure and the public cloud.
◈ Persistent storage – stateful services need storage that outlives any individual container, which demands higher service levels than ephemeral workloads.
◈ Logging and monitoring – there are usually more containers than there would be VMs for an equivalent monolithic application, and they may move when a failed health check causes Kubernetes to reschedule pods on a different host.
◈ Data – a single VM or virtual disk might host 10 different containers, so you need to shift from snapshotting at the machine level to thinking about data at the container level.
◈ Namespaces – coding to namespaces as a variable, or mapping and managing namespaces across your various environments (see the sketch after this list).
◈ Tracking changes – IT remains responsible for service levels and for tracking the history of changes. This is especially crucial given the ephemeral nature of containers and Kubernetes’ replication of pods across nodes.
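On the namespace point, here is a minimal sketch of treating the namespace as a runtime variable rather than a hard-coded constant. Note that `POD_NAMESPACE` is a common convention (typically populated via the Downward API), not something Kubernetes sets automatically.

```python
import os
from pathlib import Path

# Inside a pod, the service-account mount exposes the pod's own namespace.
SA_NAMESPACE_FILE = Path("/var/run/secrets/kubernetes.io/serviceaccount/namespace")


def current_namespace(default: str = "default") -> str:
    """Resolve the namespace at runtime instead of hard-coding it."""
    env_ns = os.environ.get("POD_NAMESPACE")  # conventionally set via the Downward API
    if env_ns:
        return env_ns
    if SA_NAMESPACE_FILE.exists():
        return SA_NAMESPACE_FILE.read_text().strip()
    return default  # e.g., when running outside a cluster during development


print(current_namespace())
```

The same lookup lets a single image move across dev, staging, and production environments without being rebuilt for each one.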
Overall, many of the Dev and Ops tools and processes that were optimized for VMs now need to be updated and re-optimized for containers.
Don’t Get Bogged Down in the Stack
To make this work, you will likely need someone assigned to an AppOps role that works in production to support containerized applications. You’ll also likely need a specialized IT Ops role – call it ClusterOps – to manage Kubernetes and field requests for namespace resources or cluster lifecycle management.
But if you run Kubernetes on premises as part of a hybrid cloud solution, you’ll also need a cloud architecture or platform engineering role to build the Kubernetes tool stack and to connect to and secure the cloud. This team can deploy and integrate all the tools needed for lifecycle management of your Kubernetes clusters, including the underlying compute, network, and storage on private infrastructure.
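As one small illustration of the ClusterOps side of this division of labor, here is a sketch of fielding a namespace request with the official `kubernetes` Python client; the team name and quota values are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

core = client.CoreV1Api()

# Create a namespace for a (hypothetical) requesting team...
core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name="team-a"))
)

# ...and cap what that team can request inside it.
core.create_namespaced_resource_quota(
    namespace="team-a",
    body=client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="team-a-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.cpu": "4", "requests.memory": "8Gi", "pods": "20"}
        ),
    ),
)
```

In most shops this kind of request would be codified in version-controlled manifests rather than scripted imperatively, which also helps with the change-tracking responsibility above.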
Building and maintaining this type of integrated on- and off-premises solution takes effort — both the upfront systems integration and configuration, and the ongoing management and testing of individual tools and platforms through their upgrade cycles.
If all the tools you need to deploy containerized workloads on premises or in the cloud are integrated and tested to work together – as with the Cisco Hybrid Cloud Platform for Google Cloud – then it will be faster and easier for developers, as well as their Application Ops and Cluster Ops counterparts, to move up the learning curve and accelerate time to value for your business.