
Tuesday 14 May 2024

Optimizing business velocity with Cisco Full-Stack Observability

Fueling digital transformation success with cost and resource optimization over applications, workloads, and components

Digital transformation comes with an irony that is not lost on IT teams. Applications and the digital experiences they enable require cloud-based resources whose costs can easily spiral out of control. Worse, lack of visibility means that utilization of these resources can be difficult to assess accurately.

This creates a conundrum. Fast, reliable application performance depends on sufficient allocation of cloud resources to support demand, even when usage spikes. Under-resourcing in this area can cause significant performance challenges that result in a very poor user experience. With this in mind, teams responsible for migrating workloads to the cloud or spinning up resources for new applications often over-provision cloud resources to be on the safe side.

The more complexity that is introduced by sprawling suites of tools, containers, application programming interfaces (APIs), and serverless components, the more ways there are to incur costs. And the more ways there are to fall short of efficiency goals as cloud resources sit idle.

As a result, technologists are under pressure to find out where costs are out of alignment and whether resources have been allocated in ways that support the business.

Taking the guesswork out of optimization


Cisco Full-Stack Observability allows operational teams to gain a broad understanding of system behavior, performance, and security threats across the entire application estate. It also equips them to understand and optimize cloud resource utilization. This optimization helps organizations lower costs by properly modulating asset utilization across workloads, paying only for what they need through right-sizing resource allocation.

It offers optimization capabilities that help teams resolve poorly aligned cloud spend, providing actionable insights into hybrid costs and application resources within their established monitoring practices. While over-provisioning to avoid downtime is wasteful from both a budgetary and sustainability perspective, under-allocation presents a serious risk.

When applications are constrained by insufficient resources, the resulting poor application performance or even downtime can damage organizational reputation and revenues. With Cisco Full-Stack Observability, teams can scale up or down to ensure resources sufficiently support workloads.

Moreover, Cisco Full-Stack Observability solutions provide visibility into application-level costs alongside performance metrics down to the pod level. It helps perform granular cost analysis of Kubernetes resources, allowing FinOps and CloudOps teams to understand the composition of their cloud spend as well as the cost of resources that are idle. Armed with granular cost insights, organizations can mitigate overspending on unused resources while ensuring that critical applications have adequate resources.

Driving optimization with AI and ML


Artificial intelligence (AI) is driving change in observability practices to improve both operational and business outcomes. Cisco Full-Stack Observability combines telemetry and business context so that AI and machine learning (ML) analytics can be uniformly applied. This allows IT Operations teams to extend their value and truly be strategic enablers for their business.

For example, application resource optimization with Cisco Full-Stack Observability takes aim at inefficiencies in Kubernetes workload resource utilization. By running continuous AI and ML experiments on workloads, it creates a utilization baseline, analyzing and identifying ways to optimize resource utilization. The resulting recommendations for improvement help to maximize resource usage and reduce excessive cloud spending.
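
To make this concrete, here is a hypothetical sketch (not actual output from the Cisco product) of what acting on such a right-sizing recommendation looks like in Kubernetes: the workload's resource requests and limits are lowered to match an observed utilization baseline. All names and values are illustrative.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-service            # hypothetical workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
      - name: checkout
        image: registry.example.com/checkout:1.4   # hypothetical image
        resources:
          requests:
            cpu: 250m        # lowered from 1000m after the baseline showed ~20% sustained use
            memory: 512Mi    # lowered from 2Gi for the same reason
          limits:
            cpu: 500m
            memory: 1Gi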

Cisco Full-Stack Observability also offers capabilities to identify potential security vulnerabilities related to the application stack and to optimize the stack against these threats. It continuously monitors for vulnerabilities within applications, business transactions, and libraries, with the ability to find and block exploits automatically. The result is real-time optimization without constant manual intervention.

To understand and better manage the impact of risks on the business, Cisco security solutions use ML and data science to automate risk management at multiple layers. First, code dependencies, configuration-level security vulnerabilities, and leakage of sensitive data are continually assessed. Second, business priorities are established through a measurement of risk probability and business impact.

This comprehensive approach to optimization makes Cisco Full-Stack Observability a powerful solution for modern, digital-first organizations.

Source: cisco.com

Wednesday 6 September 2023

Taming AI Frontiers with Cisco Full-Stack Observability Platform


The Generative AI Revolution: A Rapidly Changing Landscape


The public unveiling of ChatGPT has changed the game, introducing a myriad of applications for Generative AI, from content creation to natural language understanding. This advancement has put immense pressure on enterprises to innovate faster than ever, pushing them out of their comfort zones and into uncharted technological waters. The sudden boom in Generative AI technology has not only increased competition but has also fast-tracked the pace of change. As powerful as it is, Generative AI is often provided by specific vendors and frequently requires specialized hardware, creating challenges for both IT departments and application developers.

This is not a unique situation among technology breakthroughs, but the scale and potential for disruption in all areas of business are truly unprecedented. With proof-of-concept projects easier than ever to demonstrate through ChatGPT prompt engineering, the demand for building new technologies using Generative AI has exploded. Companies are still walking a tightrope, balancing the risk of compromising their intellectual property and confidential data against the urge to move fast and leverage the latest Large Language Models to stay competitive.

Kubernetes Observability


Kubernetes has become a cornerstone in the modern cloud infrastructure, particularly for its capabilities in container orchestration. It offers powerful tools for the automated deployment, scaling, and management of application containers. But with the increasing complexity in containers and services, the need for robust observability and performance monitoring tools becomes paramount. Cisco’s Cloud Native Application Observability Kubernetes and App Service Monitoring tool offers a solution, providing comprehensive visibility into Kubernetes infrastructure.

Many enterprises have already adopted Kubernetes as a major way to run their applications and products, both on-premises and in the cloud. When it comes to deploying Generative AI applications or Large Language Models (LLMs), however, one must ask: Is Kubernetes the go-to platform? While Cloud Native Application Observability provides an efficient way to gather data from all major Kubernetes deployments, there’s a hitch. Large Language Models have “large” in the name for a reason. They are massive, compute-intensive systems. Generative AI applications often require specialized hardware, GPUs, and large amounts of memory—resources that are not always readily available in Kubernetes environments, and the models themselves are not available everywhere.
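
When GPU capacity is present in a cluster, Kubernetes can schedule it, but only as an explicitly requested extended resource. A minimal sketch, assuming the NVIDIA device plugin is installed and using a hypothetical image name:

apiVersion: v1
kind: Pod
metadata:
  name: llm-inference                              # hypothetical pod
spec:
  containers:
  - name: inference
    image: registry.example.com/llm-server:latest  # hypothetical image
    resources:
      limits:
        nvidia.com/gpu: 1     # one GPU; exposed by the NVIDIA device plugin
        memory: 32Gi          # LLM inference is memory-hungry

If no node can satisfy the request, the pod simply stays Pending, which is exactly the scarcity problem described above.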

Infrastructure Cloudscape


Generative AI applications frequently push enterprises to explore multiple cloud platforms such as AWS, GCP, and Azure, rather than sticking to a single provider. AWS is probably the most popular cloud provider among enterprises, but Microsoft’s partnership with OpenAI and making GPT-4 available as part of Azure’s cloud services was groundbreaking. With Generative AI it is not uncommon for enterprises to go beyond one cloud, often spanning services in AWS, GCP, Azure, and hosted infrastructure. Meanwhile, GCP and AWS are expanding their toolkits from the standard pre-GPT MLOps world to fully managed Large Language Models, vector databases, and other new concepts. So we will potentially see even more fragmentation in enterprise cloudscapes.

Troubleshooting distributed applications that span clouds and networks can be a dreadful task, consuming engineering time and resources and affecting the business. Cisco Cloud Native Application Observability provides correlated full-stack context across domains and data types. It is powered by Cisco FSO Platform, which provides building blocks to make sense of complex data landscapes with an entity-centric view and the ability to normalize and correlate data with your specific domains.

Beyond Clouds


As Generative AI technologies continue to evolve, the requirements to utilize them efficiently are also becoming increasingly complex. As many enterprises learned, getting a project from a very promising prompt-engineered proof of concept to a production-ready scalable service may be a big stretch. Fine-tuning and running inference tasks on these models at scale often necessitate specialized hardware, which is both hard to come by and expensive. The demand for specialized, GPU-heavy hardware, is pushing enterprises to either invest in on-premises solutions or seek API-based Generative AI services. Either way, the deployment models for advanced Generative AI often lie outside the boundaries of traditional, corporate-managed cloud environments.

To address these multifaceted challenges, Cisco FSO Platform emerges as a game-changer, wielding the power of OpenTelemetry (OTel) to cut through the complexity. By providing seamless integrations with OTel APIs, the platform serves as a conduit for data collected not just from cloud native applications but also from any applications instrumented with OTel. Using the OpenTelemetry collector or dedicated SDKs, enterprises can easily forward this intricate data to the platform. What distinguishes the platform is its exceptional capability to not merely accumulate this data but to intelligently correlate it across multiple applications. Whether these applications are scattered across multi-cloud architectures or are concentrated in on-premises setups, Cisco FSO Platform offers a singular, unified lens through which to monitor, manage, and make sense of them all. This ensures that enterprises are not just keeping pace with the Generative AI revolution but are driving it forward with strategic insight and operational excellence.
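
As a rough sketch of the mechanics described above, an OpenTelemetry Collector can receive OTLP data from instrumented applications and forward it to an observability backend. The endpoint and credentials below are placeholders, not actual Cisco FSO Platform values:

receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlphttp:
    endpoint: https://backend.example.com   # placeholder backend endpoint
    headers:
      Authorization: "Bearer <token>"       # placeholder credentials

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      exporters: [otlphttp]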

Bridging the Gaps with Cisco Full-Stack Observability


Cisco FSO Platform serves as a foundational toolkit to meet your enterprise requirements, regardless of the complex terrains you traverse in the ever-evolving landscape of Generative AI. Whether you deploy LLM models on Azure OpenAI Services, operate your Generative AI API and Authorization services on GCP, build SaaS products on AWS, or run inference and fine-tune tasks in your own data center – the platform enables you to cohesively model and observe all your applications and infrastructure and empowers you to navigate the multifaceted realm of Generative AI with confidence and efficiency.

Cisco FSO Platform extends its utility by offering seamless integrations with multiple partner solutions, each contributing unique domain expertise. But it doesn’t stop there—it also empowers your enterprise to go a step further by customizing the platform to cater to your unique requirements and specific domains. Beyond just Kubernetes, multi-clouds, and Application Performance Monitoring, you gain the flexibility to model your specific data landscape, thereby transforming this platform into a valuable asset for navigating the intricacies and particularities of your Generative AI endeavors.

Source: cisco.com

Friday 28 October 2022

Cisco Announces Open Source Cloud-Native Offerings for Securing Modern Applications

Today at KubeCon + CloudNativeCon North America 2022 in Detroit, Cisco unveiled FunctionClarity, a new open source project which helps developers secure the serverless functions that fundamentally reduce the amount of code necessary to create and deploy cloud-native applications.

Based on Sigstore, FunctionClarity lets users sign the code of serverless functions and authenticate its integrity from a trusted pipeline when the functions are deployed across any cloud environment. It supports both keyless and key-pair signing methods to eliminate exposure of the code at runtime.

The launch of FunctionClarity comes as the use of serverless technologies is growing exponentially. For example, AWS (Amazon Web Services) Lambda functions are now invoked 3.5 times more often compared to just two years ago.


OpenClarity is a trio of projects


FunctionClarity is the third chapter in the OpenClarity set of open source projects which help solve problems around application security, the software supply chain, and the “Shift Left” movement in software development that fully considers security from the outset.

Chapter 1: At KubeCon North America in 2021, Cisco released APIClarity, an open source API tool for visualizing and identifying potential risks such as API drift, shadow and zombie APIs. It builds and analyzes the OpenAPI specifications for all APIs in your environment.

Chapter 2: In May at KubeCon Europe 2022, we followed with the release of KubeClarity, an open source tool for detection and management of Software Bill of Materials (SBOM) and vulnerabilities of container images and filesystems. It scans both runtime Kubernetes clusters and CI/CD pipelines for enhanced software supply chain security.

Building the Application-First Future


Modern, distributed application software solves real-world business problems. Increasingly, those software assets come from everywhere – internal, cloud, SaaS, open source – run anywhere, and are accessed from anyplace via APIs and service calls.

In this distributed environment, the expanding attack surface for these applications includes APIs and serverless interfaces, vulnerable services, and opaque software assets. It’s no surprise APIs and service endpoints have become preferred threat vectors, with 95% of companies experiencing API security incidents. There has been a 540% increase in the number of API-related security vulnerabilities recorded in the CVE database between 2015 and last year.

Transparency about your software tools and assets, and the security of APIs and interfaces, from development all the way through to production are therefore critical to ensuring you, your customers and end users are protected.

Panoptica brings 360-degree visibility and remediation options to your application attack surfaces in a single, modular application-security solution. As a freemium SaaS service that’s easy to get started with and consume, it connects through your application SDL workflows, toolchains, and runtime to help your teams shift everywhere. It lets developers, SREs, and security experts seamlessly collaborate within the same environment.

Nikolas Mousorous, DevOps Engineer, Marlow Navigation: “Existing security solutions we had in our environment couldn’t address our transition to modern microservice-based applications. Working with Panoptica, we were able to insert security controls into our complex environment seamlessly for secure application deployment and connectivity.”

Calisti is a complementary solution that provides discoverability, connectivity, SLO, and lifecycle management across all your application services – from greenfield, cloud-native applications to hybrid, traditional, and cloud-based applications. Calisti integrates seamlessly into your cloud operating environments, and allows your SRE, DevOps and cloud platform teams to easily connect, scale and manage the performance of application services across virtual machines (VMs), Kafka instances, and Istio service meshes, across any cloud or on-premises footprint.

Cisco Leading in Open Source


Cisco is taking an increasingly leading role in open source, stepping up contributions and driving the open source movement forward across the enterprise application ecosystem.

We have been a Platinum Member of the Cloud Native Computing Foundation (CNCF) since it was founded, and we have been a Diamond Sponsor of KubeCon every year since its inception. We also serve on the steering committee for the Linux Foundation’s TODO Group; we are a Platinum sponsor of the Open Source Security Foundation (OpenSSF), LF Networking, and LF Public Health; and we are a Gold or Premier member of Open19, the Linux Foundation, and the Bytecode Alliance.

Along with the trio of OpenClarity projects, we have launched, maintain, and contribute to many other cloud-native projects including Dex, Bank-Vaults, Istio Operator, Koperator, Logging Operator, Zot, and Network Service Mesh, and we are among the top five contributors to OpenTelemetry.

Calisti and Panoptica are both built on the open source foundation of the above-mentioned projects.

Join Us at KubeCon in Detroit


Come see Cisco at KubeCon + CloudNativeCon North America 2022 this week at the Cisco Solutions Showcase, Booth D3 in Exhibit Hall B, at Huntington Place in Detroit. There you can view a demo of FunctionClarity and learn more about the emerging Security, Observability, and Connectivity solutions Cisco is building. You can also find out about the latest open source projects at Cisco, including how to contribute and collaborate.

At the Cisco booth, you can get your own personalized hoodie, choosing from multiple designs to make an amazing statement, and even watch it get printed. In addition, for every theatre session and demo attendee, Cisco will donate a pair of socks to local Detroit homeless shelters so we can all give back to the community.

Source: cisco.com

Friday 26 August 2022

Service Chaining VNFs with Cloud-Native Containers Using Cisco Kubernetes


To support edge use cases such as distributed IoT ecosystems and data-intensive applications, IT needs to deploy processing closer to where data is generated instead of backhauling data to a cloud or to the campus data center. A hybrid workforce and cloud-native applications are also pushing applications from centralized data centers to the edges of the enterprise. These new generations of application workloads are being distributed across containers and across multiple clouds.

Network Functions Virtualization (NFV) focuses on decoupling individual services—such as Routing, Security, and WAN Acceleration—from the underlying hardware platform. Enabling these Network Functions to run inside virtual machines increases deployment flexibility in the network. NFV enables automation and rapid service deployment of networking functions through service-chaining, providing significant reductions in network OpEx. The capabilities described in this post extend service-chaining of Virtual Network Functions in Cisco Enterprise Network Function Virtualization Infrastructure (NFVIS) to cloud-native applications and containers.

Cisco NFVIS provides software interfaces through the built-in Local Portal, Cisco vManage, REST and Netconf APIs, and CLIs. Key capabilities of NFVIS include:

◉ Virtual Network Function lifecycle management

◉ Secure tunnel and sharing of IP with VNFs

◉ Route distribution through BGP: the NFVIS system learns routes announced from the remote BGP neighbor and applies them locally, and it can announce or withdraw NFVIS local routes to the remote BGP neighbor.

◉ Security embedded from installation through all software layers, such as credential management, integrity and tamper protection, session management, and secure device access

◉ Clustering, which combines nodes into a single cluster definition

◉ Third-party VNF support through the Cisco VNF Certification Program

Figure 1: Capabilities of Cisco NFVIS

Virtualizing network functions sets the stage for managing container-based applications using Kubernetes (k8s). Cisco NFVIS enables service chaining for cloud-native containerized applications for edge-compute deployments to provide secure communication from data center to cloud to edge.

Integrate Cloud-Native Applications with Cisco Kubernetes


Cisco’s goal is to make it easy for both NetOps and DevOps to work together, using the same dashboard to register, deploy, update, and monitor VMs and to provision service chains with the easy-to-use Cisco Enterprise NFVIS Portal or Cisco vManage for SD-WAN. The NetOps persona can perform each step of VNF lifecycle management to deploy VNF-based service chains.

Cisco NFVIS now includes Cisco Kubernetes to provide centralized orchestration and management of containers. Cisco Kubernetes is available to download through Cisco’s NFVIS Software site. The current release supports the deployment of Cisco Kubernetes through NFVIS Local Portal and NFVIS APIs using existing NFVIS Lifecycle Management Workflows.

Cisco Kubernetes has a built-in Kubernetes Dashboard, enabling NetOps and DevOps Admins to use standard Kubernetes workflows to deploy and manage networking and application VMs. NetOps Admins acquire access tokens in NFVIS via the built-in GUI Local Portal or NFVIS CLI to access a Kubernetes Dashboard running inside Cisco Kubernetes. NetOps personas can execute their role in establishing VNFs and then hand off administration tokens to DevOps personas to access the Kubernetes Dashboard within Cisco Kubernetes. DevOps uses the dashboard to instantiate and manage their application containers. VNFs can be service chained with applications inside Cisco Kubernetes via an ingress controller that is deployed as part of a Kubernetes cluster to provide load balancing and ingress controls.

Figure 2: Kubernetes Dashboard inside Cisco Kubernetes
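
As a generic illustration of the ingress side of such a service chain (standard Kubernetes resources, not Cisco-specific configuration), an Ingress object tells the ingress controller how to route incoming traffic to an application service; names are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress                # illustrative name
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service      # containerized application behind the chain
            port:
              number: 80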

Cisco Kubernetes supports two deployment topologies:

◉ Single-node topology is enabled in the current NFVIS 4.9.1 release.
◉ Multi-node topologies will arrive in future releases, enabling capabilities such as high availability.

Figure 3: Cisco NFVIS Application Hosting Workflow

Collaborative Tools to Simplify Cloud Native Container Applications


Cisco Enterprise NFVIS and Cisco Kubernetes make Ops team collaboration possible, powering tomorrow’s applications across cloud and edge use cases. Deploying service-chained VNFs has enabled NetOps to simplify support for distributed offices, devices, and applications. Now Cisco Kubernetes in Cisco Enterprise NFVIS provides DevOps with a familiar set of k8s workflows to deploy containerized applications from on-premises to cloud to edge, taking full advantage of the service-chained VNFs managed by NetOps.

Source: cisco.com

Tuesday 17 May 2022

Network Service Mesh Simplifies Multi-Cloud / Hybrid Cloud Communication


Kubernetes networking is, for the most part, intra-cluster. It enables communication between pods within a single cluster:

The most fundamental service Kubernetes networking provides is a flat L3 domain: Every pod can reach every other pod via IP, without NAT (Network Address Translation).

The flat L3 domain is the building block upon which more sophisticated communication services, like Service Mesh, are built:

Application Service Mesh architecture.

Fundamental to a service mesh’s capability to function is that the service mesh control plane can reach each of the proxies over a flat L3, and each of the proxies can reach each other over a flat L3.

This all “just works” within a single Kubernetes cluster, precisely because of the flat L3-ness of Kubernetes intra-cluster networking.
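
You can observe this flat L3 behavior in any cluster: every pod gets a routable IP, and any pod can reach any other pod's IP directly, without NAT. A quick check (pod names, IPs, and output abbreviated and illustrative):

$ kubectl get pods -o wide
NAME    READY   STATUS    IP            NODE
web-1   1/1     Running   10.244.1.12   node-a
web-2   1/1     Running   10.244.2.7    node-b

# From web-1, the pod IP of web-2 on another node is directly reachable:
$ kubectl exec web-1 -- ping -c 1 10.244.2.7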

Multi-cluster communication


But what if you need workloads running in more than one cluster to communicate?

If you are lucky, all of your clusters share a common, flat L3. This may be true in an on-prem situation, but often is not. It will almost never be true in a multi-cloud/hybrid cloud situation.

Often the proposed solution involves maintaining a complicated set of L7 gateway servers.


This architecture introduces a great deal of administrative complexity. The servers have to be federated together, connectivity between them must be established and maintained, and L7 static routes have to be kept up to date. As the number of clusters increases, this becomes increasingly challenging.

What if we could get a set of workloads, no matter where they are running, to share a common flat L3 domain?


The green pods could reach each other over a flat L3 Domain.

The red pods could reach each other over a flat L3 Domain.

The red and green pod could reach both the green pods and the red pods in the green (and red respectively) flat L3 Domains.

This points the way to a solution to the problem of stretching a single service mesh with a single control plane across workloads running in different clusters, clouds, and premises.


An instance of Istio could be run over the red vL3, and a separate Istio instance could be run over the green vL3.

Then the red pods are able to access the red Istio instance.

The green pods are able to access the green Istio instance.

The red/green pod can access both the red and the green Istio instances.

The same could be done with the service mesh of your choice (such as Linkerd, Consul, or Kuma).

Network Service Mesh benefits


Network Service Mesh itself does not provide traditional L7 services. It provides the complementary service of a flat L3 domain that individual workloads can connect to, so that a traditional service mesh can do what it does *better* and more *easily* across a broader span.

Network Service Mesh also enables other beneficial and interesting patterns. It allows for multi-service mesh, the capability for a single pod to connect to more than one service mesh simultaneously.

And it allows for “multi-corp extra-net:” it is sometimes desirable for applications from multiple companies to communicate with one another on a common service mesh. Network Service Mesh has sophisticated identity federation and admissions policy features that enable one company to selectively admit the workloads from another into its service mesh.

Source: cisco.com

Tuesday 26 April 2022

How To Do DevSecOps for Kubernetes

In this article, we’ll provide an overview of security concerns related to Kubernetes, looking at the built-in security capabilities that Kubernetes brings to the table.

Kubernetes at the center of cloud-native software

Since Docker popularized containers, most non-legacy large-scale systems use containers as their unit of deployment, in both the cloud and private data centers. When dealing with more than a few containers, you need an orchestration platform for them. For now, Kubernetes is winning the container orchestration wars. Kubernetes runs anywhere and on any device—cloud, bare metal, edge, locally on your laptop or Raspberry Pi. Kubernetes boasts a huge and thriving community and ecosystem. If you’re responsible for managing systems with lots of containers, you’re probably using Kubernetes.

The Kubernetes security model

When running an application on Kubernetes, you need to ensure your environment is secure. The Kubernetes security model embraces a defense in depth approach and is structured in four layers, known as the 4Cs of Cloud-Native Security:

1. Cloud (or co-located servers, or the corporate datacenter)

2. Cluster

3. Container

4. Code


Security at outer layers establishes a base for protecting inner layers. The Kubernetes documentation reminds us that “You cannot safeguard against poor security standards in the base layers by addressing security at the Code level.”

At the Cloud layer, security best practices are expected of cloud providers and their infrastructure. Working inward to the Cluster layer, cluster components need to be properly secured, as do applications running in the cluster.

At the Container level, security involves vulnerability scanning and image signing, as well as establishing proper container user permissions.

Finally, at the innermost layer, application code needs to be designed and built with security in mind. This is true whether the application runs in Kubernetes or not.

In addition to the 4Cs, there are the 3As: authentication, authorization, and admission. These measures apply at the Cluster layer. Secure systems provide resource access to authenticated entities that are authorized to perform certain actions.

Authentication


Kubernetes supports two types of entities: users (human users) and service accounts (machine users, software agents). Entities can authenticate against the API server in various ways that fit different use cases:

◉ X509 client certificates
◉ Static tokens
◉ Bearer tokens
◉ Bootstrap tokens
◉ Service account tokens
◉ OpenID Connect tokens

You can even extend the authentication process with custom workflows via webhook authentication.
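
For example, a service account can be declared for a workload, and Kubernetes mounts a token into the pod that it uses to authenticate to the API server (names illustrative):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-reader            # illustrative machine-user identity
  namespace: demo
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: demo
spec:
  serviceAccountName: app-reader   # the pod authenticates as this service account
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]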

Authorization


Once a request is authenticated, it goes through an authorization workflow which decides if the request should be granted.

The main authorization mechanism is role-based access control (RBAC). Each authenticated request has an HTTP verb like GET, POST, or DELETE, and authenticated entities have a role that allows or denies the request. Other authorization mechanisms include attribute-based access control (ABAC), node authorization, and webhook mode.
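
A minimal RBAC sketch: a Role granting read-only access to pods in a single namespace, bound to the service account from the previous example (names illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: demo
rules:
- apiGroups: [""]                       # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]       # read-only access only
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo
subjects:
- kind: ServiceAccount
  name: app-reader
  namespace: demo
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io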

Admission


Admission control is a security measure that sets Kubernetes apart from other systems. When a request is authorized, it still needs to go through another set of filters. For example, an authorized request may be rejected by an admission controller due to quotas or due to other requests at a higher priority. In addition to validation, admission webhooks can also mutate incoming requests as a way of processing request objects for use before reaching the Kubernetes API server.

In the context of security, pod security admission might add an audit notation or prevent the scheduling of a pod.
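
Pod Security admission is enabled per namespace through labels. A minimal sketch that enforces the restricted profile while recording audit annotations for violations:

apiVersion: v1
kind: Namespace
metadata:
  name: secure-apps                                    # illustrative namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted     # reject non-compliant pods
    pod-security.kubernetes.io/audit: restricted       # annotate audit events for violations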


Secrets management


Secrets are an important part of secure systems. Kubernetes provides a full-fledged abstraction and robust implementation for secrets management. Secrets are stored in etcd—Kubernetes’ state store—which can hold credentials, tokens, SSH keys, and any other sensitive data. Kubernetes Secrets are recommended only for small pieces of sensitive data.
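
Declaring a Secret is straightforward; note that values are only base64-encoded in the manifest, not encrypted, which is why encryption at rest (discussed below) matters. Values here are illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials            # illustrative name
type: Opaque
stringData:                       # stringData accepts plain text; Kubernetes encodes it
  username: app-user
  password: not-a-real-password   # placeholder, never commit real credentials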

Data encryption


When you need to store a large amount of data, consider using dedicated data stores like relational databases, graph databases, persistent queues, and key-value stores. From a security vantage point, it’s important to keep your data encrypted both at rest (when it is simply sitting in storage) and in transit (when it is sent across the wire). While data encryption is not unique to Kubernetes, the concept must be applied when configuring storage volumes for Kubernetes.

Encryption at rest


There are two approaches to encryption at rest. The first approach uses a data store that encrypts the data for you transparently. The other approach makes the application responsible for encryption, then storing the already-encrypted data in any data store.
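
For the transparent approach, the Kubernetes API server itself can encrypt Secrets before writing them to etcd, using an EncryptionConfiguration file passed via the --encryption-provider-config flag. A minimal sketch; the key material is a placeholder:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources: ["secrets"]
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>   # placeholder; never commit real keys
  - identity: {}                               # fallback for reading not-yet-encrypted data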

Encryption in transit


Eventually, you’ll need to send your data for processing. Because the data is often (necessarily) decrypted at this point, it should be sent over a secure channel. Using HTTPS, SCP, or SFTP for secure transit of data is best practice.

Kubernetes services can be configured with specific ports like 443 for HTTPS.


Managing container images securely


Kubernetes orchestrates your containers. These containers are deployed as images. Many Kubernetes-based systems take advantage of third-party images from the rich Kubernetes ecosystem. If an image contains vulnerabilities, your system is at risk.

There are two primary measures to safeguard your system. First, use trusted image registries, such as Google Container Registry, AWS Elastic Container Registry, or Azure Container Registry. Alternatively, you may run your own image registry using an open-source project like Harbor and curate exactly which trusted images you allow.

The other measure is to frequently scan images for vulnerabilities as part of the CI/CD process.
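
As one example of that scanning step (using the open-source Trivy scanner, which is not a tool named in this article), a CI job can be made to fail when an image carries serious known vulnerabilities; the image name is illustrative:

$ trivy image --severity HIGH,CRITICAL --exit-code 1 registry.example.com/myapp:1.2.3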


Defining security policies


Kubernetes and its ecosystem provide several ways to define security policies to protect your systems. Note that the built-in Kubernetes PodSecurityPolicy resource is deprecated and will be removed in Kubernetes 1.25. At the time of this writing, the Kubernetes community is working on a lightweight replacement. However, the current recommendation is to use a robust third-party project—for example, Gatekeeper, Kyverno, or K-Rail—as a policy controller.

Policies can be used for auditing purposes, to reject pod creation, or to mutate the pod and limit what it can do. By default, pods can receive traffic from any source and send traffic to any destination. Network policies allow you to define the ingress and egress of your pods. The network policy typically translates to firewall rules.
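
A sketch of a NetworkPolicy that locks a backend down to traffic from frontend pods only (labels and ports illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend              # the pods this policy protects
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend         # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080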

Resource quotas are another type of policy, and they’re particularly useful when multiple teams share the same cluster using different namespaces. You can define a resource quota per namespace and ensure that teams don’t try to provision too many resources. This is also important for security purposes, such as if an attacker gains access to a namespace and tries to provision resources (to perform crypto mining, for example).
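
A namespace-scoped ResourceQuota for that scenario might look like this (limits illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a             # illustrative team namespace
spec:
  hard:
    requests.cpu: "10"          # total CPU the namespace may request
    requests.memory: 20Gi
    limits.cpu: "20"
    pods: "50"                  # caps runaway pod creation, e.g. by an attacker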

Monitoring, alerting, and auditing


We have mostly discussed preventative measures thus far. However, a crucial part of security operations is detecting and responding to security issues. Unusual activity could be a sign that an attack is in progress or that a service is experiencing degraded performance. Note that security issues often overlap with operational issues. For example, an attacker downloading large amounts of sensitive data can cause other legitimate queries to time out or be throttled.

You should monitor your system using standard observability mechanisms like logging, metrics, and tracing. Kubernetes provides built-in logging and metrics for its own components. Once a serious problem is discovered, alerts should be raised to the relevant stakeholders. Prometheus can provide metrics monitoring and alerting, while Grafana provides dashboards and visualizations for those metrics. These tools, along with AppDynamics or countless others, can serve as effective Kubernetes monitoring solutions.

When investigating an incident, you can use the Kubernetes audit logs to check who performed what action at a particular time.
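
Audit logging is driven by an audit policy file handed to the API server. A minimal sketch that records metadata (who, what, when) for every access to Secrets and ignores everything else:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata               # log request metadata, but not request bodies
  resources:
  - group: ""                   # core API group
    resources: ["secrets"]
- level: None                   # drop all other events in this minimal sketch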

Source: cisco.com

Tuesday 29 March 2022

Hyperconverged Infrastructure with Harvester: The Start of the Journey


Deploying and running data center infrastructure management – compute, networking, and storage – has traditionally been manual, slow, and arduous. Data center staffers are accustomed to doing a lot of command line configuration and spending hours in front of data center terminals. Hyperconverged Infrastructure (HCI) is the way out: It solves the problem of running storage, networking, and compute in a straightforward way by combining the provisioning and management of these resources into one package, and it uses software defined data center technologies to drive automation of these resources. At least in theory.

Recently, a colleague and I have been experimenting with Harvester, an open source project to build a cloud native, Kubernetes-based Hyperconverged Infrastructure tool for running data center and edge compute workloads on bare metal servers.

Harvester brings a modern approach to legacy infrastructure by running all data center and edge compute infrastructure, virtual machines, networking, and storage, on top of Kubernetes. It is designed to run containers and virtual machine workloads side-by-side in a data center, and to lower the total cost of data center and edge infrastructure management.

Why we need hyperconverged infrastructure

Many IT professionals know HCI concepts from using products from VMware, or by employing cloud infrastructure like AWS, Azure, and GCP to manage virtual machine applications, networking, and storage. The cloud providers have made HCI flexible by giving us APIs to manage these resources with less day-to-day effort, at least once the programming is done. And, of course, cloud providers handle all the hardware – we don’t need to stand up our own hardware in a physical location.

Multi-node Harvester cluster

However, most of the current products that support converged infrastructure tend to lock customers into using the vendor’s own technology, and they usually come with licensing fees. Now, there is nothing wrong with paying for a technology when it helps you solve your problem. But single-vendor solutions can wall you off from knowing exactly how these technologies work, limiting your flexibility to innovate or react to issues.

If you could use a technology that combines with other technologies you are already required to know today – like Kubernetes, Linux, containers, and cloud native – then you could theoretically eliminate some of the headaches of managing edge compute / data centers, while also lowering costs.

This is what the people building Harvester are attempting to do.

Adapting to the speed of change


Cloud providers have made it easier to deploy and manage the infrastructure surrounding applications. But this has come at the expense of control, and in some cases performance.

HCI, which the cloud providers support and provide, gets us some of that control back. However, the recent rise of application containers over virtual machines changed again how infrastructure is managed and even thought of, abstracting layers of application packaging while making that packaging lighter weight than last-generation VM packaging. Containers also provide application environments that are faster to start up and easier to distribute because of the decreased image sizes. Kubernetes takes container technologies like Docker to the next level by adding in networking, storage, and resource management between containers, in an environment that connects everything together. Kubernetes allows us to integrate microservice applications with automation and speedy deployments.

Kubernetes offers an improvement on HCI technologies and methodologies. It provides a better way for developers to create cloud agnostic applications, and to spin up workloads in containers more quickly than traditional VM applications. Kubernetes did not aim to replace HCI, but it did make a lot of the goals of software deployment and delivery simpler, from an HCI perspective.

In a lot of environments, Kubernetes runs inside VMs, so you still need external HCI technology to manage the underlying infrastructure for the VMs that run Kubernetes. The problem now is that if you want to run your application in Kubernetes containers on infrastructure you control, you have different layers of HCI to support. Even if you get better application management with Kubernetes, infrastructure management becomes more complex. You could try to use vanilla Kubernetes for every part of your edge-compute / data center stack and run it as your bare metal operating system instead of traditional HCI technologies, but you have to be okay with migrating all workloads to containers, and in some cases that is a high hurdle to clear, not to mention the HCI networking that you would need to migrate over to Kubernetes.

The good news is that there are IoT and edge compute projects that can help. The Rancher organization, for example, created a lightweight version of Kubernetes, k3s, for IoT compute resources like Raspberry Pi and Intel NUC computers, which helps push Kubernetes onto more bare metal infrastructure. Other projects, like KubeVirt, run virtual machines inside containers on top of Kubernetes, which speeds up VM deployment and lets us use Kubernetes for the virtual networking layers and all application workloads (containers and VMs). And other technology projects, like Rook and Longhorn, help with persistent storage for HCI through Kubernetes.

If only these could combine into one neat package, we would be in good shape.

Hyperconverged everything


Knowing where we have come from in the world of Hyperconverged Infrastructure for our data centers and our applications, we can now move on to what combines all these technologies together. Harvester packages up k3s (lightweight Kubernetes), KubeVirt (VMs in containers), and Longhorn (persistent storage) to provide Hyperconverged Infrastructure for bare metal compute using cloud native technologies, and wraps an API / Web GUI bow on it for convenience and automation.

Source: cisco.com

Monday 5 April 2021

Intersight Kubernetes Service (IKS) Now Available!


We announced the Tech Preview of Intersight Kubernetes Service (IKS), which received tremendous interest. Over 50 internal sales teams, partners, and customers participated, providing valuable recommendations and great validation for our offering and strategic direction. Today we are pleased to announce the general availability of IKS!

Read More: SaaS-based Kubernetes lifecycle management: an introduction to Intersight Kubernetes Service

Intersight Kubernetes Service’s goal is to accelerate our customers’ container initiatives by simplifying the management effort for Kubernetes clusters across the full infrastructure stack and expanding the application operations toolkit. IKS provides flexibility and choice of infrastructure (on-prem, multi-hypervisor, bare metal, public cloud) so that our customers can focus on running and monetizing business critical applications in production, without having to worry about the challenges of open-source or figuring out the mechanics to manage, operate and correlate between each layer of the infrastructure stack.

With Cisco Intersight it can be easy

For IT admins and infrastructure operators IKS means an easy – almost hands-off – secure deployment and comprehensive lifecycle management of 100% open source Kubernetes (K8s) clusters and add-ons, with full-stack visibility from the on-prem server firmware and management up to the K8s application. Initially, ESXi targets will be supported, with bare metal and public cloud integrations coming soon, along with many other features, such as adopted clusters, multi-cluster and vGPU support.

For DevOps teams, IKS is so much more than just a target to deploy K8s-based applications. As a native service of the Intersight platform, DevOps engineers can now benefit from the recently announced HashiCorp partnership and the brand new Intersight Service for HashiCorp Terraform, deploying their applications using Infrastructure as Code (IaC) and Terraform. They can also benefit from the native Intersight Workload Optimizer functionality, which means complete mapping of interdependencies between K8s apps and infrastructure, plus AIOps-powered right-sizing (based on historical utilization of resources) and auto-scaling.

Let’s take a look at what IKS enables in a bit more detail:

A common platform for full-stack infrastructure and K8s management


The modern challenge for IT admins and infrastructure teams is navigating a hyper-distributed, extremely diverse IT landscape: hybrid cloud infrastructure with on-premises locations (data centers, edge, co-lo) and multiple clouds, heterogeneous stacks and workload requirements (bare metal, virtual machines, containers, serverless), and the need for speed to cater to internal customers (DevOps, SecOps, other IT and LoB users) and, ultimately, end users!

The only way to address this complexity is to simplify with a unified, consistent cloud operating model and real-time automation to balance risk, cost, and control. This is where Cisco Intersight comes in. Cisco Intersight is a common platform for intelligent visualization, optimization, and orchestration for applications and infrastructure (including K8s clusters and apps). It enables teams to automate and simplify operations, use full-stack observability to continuously optimize their environment, and work better and faster with DevOps teams for cloud native service delivery.

Intersight – The world’s simplest hybrid cloud platform

With IKS and other Intersight services, IT admins can easily build an entire K8s environment from server firmware management, to the hyperconverged layer, to deploying clusters in a few clicks via the GUI or directly using the APIs – and now with Terraform code! In addition, Intersight provides common identity management (SSO, API security), RBAC (two new roles for K8s admins and K8s operators) and multi-tenancy (server/hyperconverged/K8s layers) to support customers looking for a secure, isolated, managed and multi-tenant K8s platform.

Regular IKS releases ensure that IT admins can effortlessly keep K8s versions, add-on versions, and security fixes up to date on their clusters. We curate, security-harden, and manage essential and optional add-ons (CNI, CSI, L4 and L7 load balancers, K8s dashboard, Kubeflow, monitoring, etc.) to provide production-grade tools to our customers. These IKS features allow customers to deploy and consume secure, consistent, and reliable open-source K8s integrations without becoming CNCF landscape experts, while maintaining the flexibility to port any other open-source components. A demo video is available here.

Continuous Delivery for Kubernetes clusters and apps


IKS supports multiple options for integrating Kubernetes resources into customers’ continuous delivery pipelines, saving precious time and effort in configuration and development. Users can use the OpenAPI interface, the Python SDK, or the Intersight Terraform provider. This makes it easy to integrate IKS with customers’ existing Infrastructure as Code (IaC) strategies.

In addition, the Cisco Intersight Service for HashiCorp Terraform (IST), a result of our partnership with HashiCorp, now makes it even simpler for customers to securely integrate their on-prem environments and resources with their IaC plans.

For many, however, the preferred way is to continuously deploy application Helm charts to the clusters. To address this requirement, another IKS feature we will be adding soon is a Continuous Delivery toolkit for Helm charts, equipping customers with yet another mechanism to deploy and manage their applications on their K8s platform.

Full-stack app visualization, AIOps rightsizing and intelligent top-down auto-scaling


Another important Intersight native service that IKS benefits from is Intersight Workload Optimizer (IWO). By installing the IWO agent helm chart on IKS tenant clusters, customers benefit from a comprehensive observability and automation toolkit for their K8s platforms, freeing them to focus on what matters: onboarding application teams and increasing K8s adoption.

Today, IWO with IKS works in three ways:

◉ First, with IWO, customers can gain insights with interdependency mapping between K8s apps across virtual machines, servers, storage and networks, for simplified, automated troubleshooting and monitoring.

◉ Second, IWO allows DevOps teams to right-size K8s applications without the labor of manually poring over real-time traffic data patterns against configured limits, requests, or namespace quota constraints to identify the optimal CPU and memory thresholds for the horizontal and vertical pod auto-scalers. Instead, IWO automatically detects thresholds based on user-configured policies.

◉ Finally, IWO enables intelligent, top-down auto-scaling – from the K8s app, to the cluster, to the infrastructure layer. Typically, DevOps teams use the Kubernetes default scheduler to handle fluctuating demand for their applications. While this works for initial pod placement, it doesn’t help during the lifecycle of the pod, where actions might need to be taken due to node congestion or low traffic demand. IWO automatically and continuously redistributes IKS workloads and pods to mitigate node congestion or optimize underutilized infrastructure. This results in better scaling decisions.

Source: cisco.com

Thursday 4 March 2021

Enable Consistent Application Services for Containers


Kubernetes is all about abstracting away complexity. As Kubernetes continues to evolve, it becomes more intelligent and even more powerful when it comes to helping enterprises manage their data centers, not just in the cloud. While enterprises have had to deal with the challenges of managing different types of modern applications (AI/ML, big data, and analytics) to process that data, they face the challenge of maintaining top-level network and security policies and gaining better control of workloads to ensure operational and functional consistency. This is where Cisco ACI and F5 Container Ingress Services come into the picture.

F5 Container Ingress Services (CIS) and Cisco ACI

Cisco ACI offers customers an integrated network fabric for Kubernetes. Recently, F5 and Cisco joined forces by integrating F5 CIS with Cisco ACI to bring L4-7 services into the Kubernetes environment and further simplify the user experience of deploying, scaling, and managing containerized applications. This integration specifically enables:

◉ Unified networking: Containers, VMs, and bare metal

◉ Secure multi-tenancy and seamless integration of Kubernetes network policies and ACI policies

◉ A single point of automation with enhanced visibility for ACI and BIG-IP

◉ F5 application services natively integrated into container and Platform as a Service (PaaS) environments

One of the key benefits of this implementation is ACI encapsulation normalization. The ACI fabric, as the normalizer for the encapsulation, allows you to merge different network technologies or encapsulations, be it VLAN or VXLAN, into a single policy model. BIG-IP, through a simple VLAN connection to ACI with no need for an additional gateway, can communicate with any service anywhere.



Solution Deployment


To integrate F5 CIS with Cisco ACI for the Kubernetes environment, you perform a series of tasks. Some you perform in the network to set up the Cisco Application Policy Infrastructure Controller (APIC); others you perform on the Kubernetes server(s). Rather than getting down to the nitty-gritty, I will just highlight the steps to deploy the joint solution.

Prerequisites

The BIG-IP CIS and Cisco ACI joint solution deployment assumes that you have the following in place:

◉ A working Cisco ACI installation

◉ ACI must be integrated with vCenter VDS

◉ Fabric tenant pre-provisioned with the required VRFs/EPGs/L3OUTs.

◉ BIG-IP already running for non-container workloads

Deploying Kubernetes Clusters to ACI Fabrics

The following steps provide a complete cluster configuration:

Step 1. Run the ACI provisioning tool to prepare Cisco ACI to work with Kubernetes

Cisco provides the acc-provision tool to provision the fabric for the Kubernetes VMM domain and generate a .yaml file that Kubernetes uses to deploy the required Cisco Application Centric Infrastructure (ACI) container components. If needed, download the provisioning tool.

Next, you can use this provisioning tool to generate a sample configuration file that you can edit.

$ acc-provision --sample > aci-containers-config.yaml

You can now edit the sample configuration file to provide information from your network, as sketched below.
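
The fields describe your fabric; here is an abbreviated sketch of the kind of values you fill in (all placeholders, and the exact schema depends on your acc-provision version):

aci_config:
  system_id: mykube             # placeholder name for the cluster / VMM domain
  apic_hosts:
  - 10.0.0.1                    # placeholder APIC address
  vrf:
    name: mykube-vrf
    tenant: common
  l3out:
    name: mykube-l3out
    external_networks:
    - mykube-extnet

With the configuration file filled in, you can run the following command to provision the Cisco ACI fabric: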

acc-provision -c aci-containers-config.yaml -o aci-containers.yaml -f kubernetes-<version> -a -u [apic username] -p [apic password]

Step 2. Prepare the ACI CNI plugin configuration file

The above command also generates the file aci-containers.yaml that you use after installing Kubernetes.

Step 3. Prepare the Kubernetes nodes – set up networking on each node to support the Kubernetes installation

With ACI provisioned, you can prepare networking for the Kubernetes nodes. This includes steps such as configuring the VM interfaces toward the ACI fabric, configuring a static route for the multicast subnet, and configuring the DHCP client to work with ACI.

Step 4. Install the Kubernetes cluster

After you provision Cisco ACI and prepare the Kubernetes nodes, you can install Kubernetes and the ACI containers. You can use any installation method appropriate to your environment.

Step 5. Deploy the Cisco ACI CNI plugin

When the Kubernetes cluster is up and running, copy the previously generated CNI configuration to the master node and install the CNI plug-in using the following command:

kubectl apply -f aci-containers.yaml

The command installs the following:

◉ ACI Containers Host Agent and OpFlex agent in a DaemonSet called aci-containers-host

◉ Open vSwitch in a DaemonSet called aci-containers-openvswitch

◉ ACI Containers Controller in a deployment called aci-containers-controller

◉ Other required configurations, including service accounts, roles, and security context


For the authoritative word on this specific implementation, see Cisco’s documented workflow for integrating Kubernetes into Cisco ACI for the latest and greatest.

After you have performed the previous steps, you can verify the integration in the Cisco APIC GUI. The integration creates a tenant, three EPGs, and a VMM domain. Each tenant has visibility into all of the Kubernetes pods.


Install the BIG-IP Controller


The F5 BIG-IP Controller (k8s-bigip-ctlr), also known as Container Ingress Services, is a Kubernetes native service that provides the glue between container services and BIG-IP. It watches for changes and communicates them to BIG-IP delivered application services. These, in turn, keep up with changes in container environments and enable the enforcement of security policies.

Once you have a running Kubernetes cluster deployed to ACI Fabric, you can follow these instructions to install BIG-IP Controller.

Use the kubectl get command to verify that the k8s-bigip-ctlr Pod launched successfully.
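
For example (namespace and output illustrative; use the namespace you deployed the controller into):

$ kubectl get pods -n kube-system | grep k8s-bigip-ctlr
k8s-bigip-ctlr-deployment-5f6789c8d-abcde   1/1   Running   0   2m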


BIG-IP as a north-south load balancer for External Services


For Kubernetes services that are exposed externally and need to be load balanced, Kubernetes does not handle the provisioning of the load balancing; it is expected that the load balancing network function is implemented separately. For these services, Cisco ACI takes advantage of the symmetric policy-based redirect (PBR) feature available on Cisco Nexus 9300-EX and FX leaf switches in ACI mode.

This is where BIG-IP Container Ingress Services (or CIS) comes into the picture, as the north-south load balancer. On ingress, incoming traffic to an externally exposed service is redirected by PBR to BIG-IP for that particular service.


If a Kubernetes cluster contains more than one pod for a particular service, BIG-IP will load balance the traffic across all of them. In addition, each new pod is added to the BIG-IP pool dynamically.
