Saturday 7 March 2020

Cisco Brings the Power of the Cloud and AI to Contact Centers with Release 12.5

“I need the business agility, flexibility, and speed of new feature delivery that cloud offers while protecting my contact center investments.”

“I need to modernize my customer and agent experiences to remain competitive.”

“I need easy access to cloud-based applications that work seamlessly with my on-premises contact center infrastructure.”

Do These Challenges Sound Familiar?


You’re not alone! Customers across the globe and from many industries have shared with us their struggle to balance two needs: innovating with cloud-based capabilities powered by artificial intelligence (AI) to stay competitive, and maximizing their valuable on-premises contact center investments in people, process, and technology.

They’ve expressed their desire for an open and secure platform that gives them reliability and business continuity, with new flexibility and agility needed to meet the ever-changing demands of their business. And they’re looking for unique ways to create differentiated experiences for both their employees and customers that will result in better customer experiences, repeat business, and improved performance of their contact center.

Cisco is addressing these needs with Release 12.5 – our latest software for Unified Contact Center Enterprise, Packaged Contact Center Enterprise, Unified Contact Center Express, and Hosted Collaboration Solution for Contact Center.  We’re introducing some exciting new capabilities designed to simplify how you manage your contact center, make your agents more productive, and create better experiences for your customers.

Highlights of What’s New


◉ Webex Experience Management (formerly CloudCherry), our new customer experience management solution, is integrated into the Cisco agent desktop, providing agents and supervisors with customer sentiment, journey insights, and feedback metrics in real time.

◉ An intuitive conversational IVR, powered by Google Dialogflow, improves customer self-service experiences over the phone by easily adding modern speech interfaces to existing self-service options.

◉ Customer Journey Analyzer, our cloud-based advanced analytics reporting solution, is now available for trial to all our on-premises contact center customers.

◉ AI-based Voicea call transcripts and summaries are also available for trial to improve agent productivity, call wrap-up, and the accuracy of action items.

◉ Smart Licensing provides a simple, automated way to add/activate new software licenses to keep up with fluctuating interaction volumes.

Integration with Webex Experience Management (formerly CloudCherry)


Our new AI-powered, cloud-based customer experience solution can be integrated with your contact center via two new agent desktop gadgets. The solution enables contact centers to capture customer feedback using an easy-to-use survey designer. Once feedback is captured, agents can view customer feedback scores within their agent desktop via the new Customer Experience Journey gadget. This gives them real-time visibility into customer sentiment and past journey experiences, so they can truly understand how the customer is feeling and personalize their interaction accordingly. The Customer Experience Analytics gadget displays the overall pulse of customer feedback through industry-standard metrics, such as NPS, CSAT, and CES.

Innovative AI-Powered Self-Service


Our Cisco Unified Customer Voice Portal leverages Google Dialogflow, allowing AI to bring speech-to-text, NLU-based intent detection, and text-to-speech capabilities to create an efficient conversational self-service experience for your customers while relieving agents of simple and repetitive tasks.

Business Insights via Cloud Analytics


Bringing the power of cloud analytics to all Cisco on-premises contact centers, Customer Journey Analyzer provides advanced out-of-the-box reporting, arming contact center managers with historical data from multiple contact center deployments to generate specific business views across the business.  It displays trends to help supervisors identify patterns and gain insights for making continuous improvements, and it includes an Abandoned Contacts dashboard to identify where customers are abandoning the journey so that appropriate and proactive actions can be taken. Available for trial now.

Voicea Call Transcript


We’ve created a new Cisco agent desktop gadget that uses our very own Voicea AI, leveraging accurate speech-to-text technology to provide a complete transcription of the interaction between agent and customer. This exciting new feature, which is available now for field trial with Cisco Unified Contact Center Enterprise, simplifies call wrap-up and helps agents accurately capture the details of the conversation, improving call continuity and agent productivity. 

Licensing Made Simple


Smart Licensing enables contact centers to remain agile and quickly add/activate new software licenses to keep up with fluctuating interaction volumes.  Using the Cisco Smart Software Management Portal, our customers can easily see in real-time how many total licenses they have and how many are in use, giving them peace of mind and an accurate measure of their license inventory.

Improved Agent Experience


We’re making your agents more productive and their experience more intuitive with new keyboard shortcuts, drag-and-drop desktop gadgets, and the ability to update call variables during interactions. Agents can also now view their statistics in real time.

Secure and Scalable 


Our new release also includes a variety of security-related enhancements that further harden the solution against potential vulnerabilities. At the same time, we continue stretching scale limits by doubling outbound calls per second and total supported dialer ports, and increasing simultaneous active campaigns by 2.5 times.

All Our Customers Benefit


I’m excited about how the cloud brings all these enhancements to our on-premises customers. Our goal continues to be to bring all our customers the latest technological innovations available today, regardless of whether they own their contact center system or subscribe to it as a service, and to give them a practical and simple path to the cloud at a pace that’s just right for them.

Friday 6 March 2020

Head in the Clouds? A Milestone Towards Comprehensive Headset Management

Everything Started so Promisingly


If you’re an IT decision-maker who has purchased headsets for users, you know it can be an investment with one of the most uncertain returns: you buy them, distribute them, and then begins the challenge of tracking headsets and troubleshooting audio issues. A time-consuming chore that was supposed to be easy!

You met all the major headset vendors, tested their finest acoustic features, then, based on your budget and assumptions on the end-user preferences, you chose a certain mix of headsets for the corporate catalog or a bulk purchase.

Those vendors demonstrated their latest and greatest backend tools that allow you to collect usage data and track headsets. You could figure out the ROI of your headset investment and, most importantly, understand user preferences, so that next time you can make a data-driven decision in purchasing the right mix of devices.

A dream come true until you realize that:

◉ The data collected from the headset is inconsistent and partial:

     ◉ Users must have a client app running on their machine; since the app is considered useless by many, it ends up uninstalled, killed or removed from the startup list.

     ◉ Most of the time, the app works with PCs only, leaving out of the picture an ever-increasing headset usage with mobile devices.

◉ You may be paying for a service with limited scope (headset only) that doesn’t deliver an integrated view with the rest of your collaboration platform.

You are back to square one with no actionable insights, more overhead, and extra time spent managing a solution that does not meet all your needs. Cisco believes there is a better solution! We are committed to leveraging the power of the Webex platform to deliver unprecedented headset management capabilities that solve the limitations of other vendors’ solutions.

Workplace Transformation Challenge


Ubiquitous connectivity, powerful mobile devices, and increasing adoption of soft clients foster the emergence of new workflows that are no longer tied to physical desks. More and more of us are becoming mobile-first workers who accomplish their daily tasks from anywhere, using a laptop or smartphone. In these scenarios, the headset is a critical element to enable high quality, crystal clear communication and collaboration in often noisy environments: open offices, coffee shops, train stations, buses, etc.

This modern, mobile work style throws up particular challenges for IT:

How do we make sure we collect the headset data we need in these dynamic scenarios?

How do we easily make relevant information readily available to support teams and business decision makers?

The perfect solution has to satisfy the following criteria:

1. It must work in any user scenario: anywhere, with laptops, smartphones, and tablets.
2. It needs to collect the headset data automatically – without the complexity of managing headset client applications.
3. It needs to be part of the everyday toolset IT already uses – so that it is easily accessible.

If you are a Cisco on-premises customer, you may know that Cisco Unified Communications Manager (CUCM) supports inventory, remote firmware upgrade, and remote configuration for Cisco Headsets connected to IP phones and Jabber soft clients. This unprecedented integration satisfies the perfect-solution criteria for companies in verticals characterized by more traditional workflows.

A perfect solution for on-premises customers that deserves to be extended to the Cloud!

Cisco Webex: Powerful Headset Management with Low IT Touch


Cisco Webex provides essential meetings, calling, and team collaboration for enterprises of all sizes, worldwide. Webex Control Hub is Cisco’s single pane of glass for managing cloud and hybrid services. We are excited to announce the release of headset inventory management in Control Hub, a capability that, along with remote firmware upgrade through Webex Teams, represents a solid foundation for building the most comprehensive headset management solution in the market.

IT can buy any Cisco Headset in the 500 and 700 Series (tracking abilities are limited for third-party headsets). Once distributed to users and plugged in or paired to a laptop (Mac or PC) running Webex Teams, the headsets appear in the devices section of Control Hub (along with the rest of the collaboration portfolio), showing relevant inventory information such as connection status, connection history, firmware version, last user, and more. The inventory is dynamically and automatically generated and available now to all Webex customers at no extra charge!

Meeting Criteria for Successful Headset Management


Earlier, we introduced criteria that define the perfect headset management solution. Let’s see how the headset management in Webex Control Hub performs in that framework:

◉ Automatic for the end user

It’s enough to collaborate using Webex Teams to generate data. No actions or time spent on the user side.

◉ Works with the tools IT uses daily

Admins already use Control Hub, and these new capabilities extend its overall value.

◉ Works in any user scenario

Cisco will support headset management on a range of devices and modes of collaboration. The team is currently working on enabling inventory and remote firmware upgrade through the rest of the Cloud soft clients: Webex Teams mobile app, Webex Meetings desktop, Webex Meetings mobile.

Path Towards Realizing a Full ROI


Headset management in Control Hub represents an important milestone towards maximizing the return on headset investments. Today, IT admins can track their headsets throughout their lifecycle.

Soon, it won’t matter whether a customer:

◉ Is deployed on-premises only, Cloud only, or hybrid.
◉ Uses IP phones, desk video devices, soft clients or any mix of them.
◉ Supports mobile workers, desk workers or both.

Headset management will work across any possible customer scenarios!

The Cisco Collaboration engineering team is developing additional capabilities that will allow IT to diagnose communication issues, configure headsets remotely, unveil usage patterns and preferences, and more – ultimately providing IT decision-makers with the unprecedented insights required to optimize future headset investments.

Thursday 5 March 2020

Unify NetOps and SecOps with SD-WAN Cloud Management

CIOs know that ubiquitous connectivity across domains—campus, branch, cloud, and edge, wired or wireless—is a baseline requirement for building a digital enterprise. But, as CISOs know, as the network fabric spreads to encompass devices and location-agnostic data and compute resources, the need for end-to-end integrated security is equally paramount. Add in the necessity to continuously monitor and maintain application performance throughout campus, branch, and edge locations and you create an enormous workload for NetOps and SecOps teams that are simultaneously dealing with static CapEx and OpEx budgets. Often the result is a tug-of-war between the teams: one striving to keep the network optimized for performance and availability, the other striving to keep data, applications, and devices secure.

Conflict or Collaboration?


The problem of balancing the goals of NetOps with SecOps has a lot to do with how the network and all the connected devices and domains are being managed. Traditionally in NetOps, there have been separate consoles and Unified Computing Servers (UCS) to configure, monitor and analyze network domains – several for the data center, multiple for the campus wireless network, and still more for cloud, branch, and edge deployments.

Similarly, in order for SecOps to capture, log, and analyze traffic in all the various domains, special taps are installed where traffic is entering and leaving the domains. SecOps has an additional burden of storing all the traffic logs in case of a breach or successful malware attack in order to pinpoint the cause and prove appropriate steps are taken to remediate breaches and prevent future attacks.

That’s a lot of boxes to buy, install, and securely manage—a number that grows with each expansion of the enterprise network. Ironically, the extra compute devices needed by SecOps ultimately have to be managed by NetOps to ensure they do not affect overall network performance. Thus, more conflict.

Can NetOps and SecOps get to the point of collaboration instead of conflict? In fact, new cross-enterprise business initiatives make collaboration a necessity.

Digital Transformation Projects Benefit from Unified Operations and Security


As organizations seek new ways to connect with customers, suppliers, and service partners by making business processes personal and frictionless, they initiate application development efforts that span operations. The NetOps and SecOps teams are a unifying foundation for these development efforts.

Deploying new multi-cloud applications or moving processes to the edge—retail outlets, branch offices, medical clinics—requires assurance that the network is responsive, always available, and secure. NetOps needs to work with Development teams to understand network SLAs and cloud usage requirements for the new apps. SecOps needs to ensure that the proper network permissions, segmentations, and polices are applied to the network at application launch time. NetSecOps collaboration is key to timely deployment of next-generation applications with security and the required levels of performance.

Collaboration is important too in the battle of the budgets. With IT budgets generally flat over the last few years, making sure NetOps and SecOps teams use both CapEx and OpEx funds judiciously is critical for maximum efficiency. There is an opportunity to combine NetOps and SecOps teams to generate the most value from the available budget, equipment, and knowledge of how an enterprise’s unique network responds to changes in applications and threats.

From these examples, you can see that unifying NetOps and SecOps has solid benefits for enterprise digital transformation efforts. Is there a technology platform that makes unification not only possible, but also makes the transition a natural evolution rather than a forced organizational change? By combining a software-defined network fabric with single-console cloud management, SD-WAN can play a significant role in the unification of NetSecOps.

SD-WAN Unified Network Cloud Management for NetSecOps


A primary benefit of Cisco SD-WAN powered by Viptela for NetSecOps is the ability to provide a single, role-based interface in Cisco vManage to control network performance, segmentation, and security. Through the lens of vManage, NetSecOps can:

◉ Install and configure branch SD-WAN routers remotely with Zero Touch Provisioning (ZTP)

◉ Automatically route traffic through the most efficient and cost-effective path (MPLS, broadband, direct internet, LTE/5G) using dynamic path selection.

◉ Manage performance, security, and access policies for cloud onramps to SaaS, IaaS, and colocations.

◉ Remotely configure and manage at the branch level the application-aware firewalls, URL-filtering, intrusion detection/prevention, DNS-layer security, and Advanced Malware Protection (AMP) to secure branch traffic that is using direct internet connections to SaaS applications.

◉ Draw on policies set up in Cisco SD-Access and Identity Services Engine (ISE) to configure segmentation rules that are uniformly applied across distributed locations to keep traffic separated—such as employee wireless access from payment system traffic—improving performance and security.

These are some of the benefits SD-WAN provides to a unified NetSecOps team. One console—vManage—to configure, monitor, and protect a distributed organization’s branches, remote workforce, and applications. Let’s double-click on two common yet difficult to manage situations—securing east-west branch traffic and accessing direct internet access SaaS/IaaS-hosted applications—to see how SD-WAN helps a unified NetSecOps team operate.

Managing and Protecting East-West Traffic Flow and Security in Branches

With the plethora of integrated security layers that come with Cisco SD-WAN, traffic entering and leaving a branch is thoroughly inspected for application infiltration, intrusion by malware, and access to known bad URLs. But there is still the tricky problem of malware introduced by a device or someone inside the branch network.

In the days of hub-and-spoke WANs, traffic from each device within a branch would be backhauled to the enterprise data center for inspection and verification, and then back to the branch. This has always been a troublesome scenario for NetOps, as the backhauling and inspection load interfered with traffic that legitimately had to go to the data center for additional processing. The alternative, of course, was to lock down all the endpoints in branches, limiting their flexibility and any BYOD options for employees.

Securing Access to SaaS Applications via Direct Internet Connections

The workforce is quickly becoming more dependent on applications hosted in SaaS cloud platforms, such as Office 365, which require routing through direct internet access. With SD-WAN, NetSecOps can focus on not just fine-tuning application performance but also the defenses that secure the valuable corporate data being transmitted over the internet connections to and from branch sites. By using Cisco SD-WAN Cloud OnRamps to SaaS and IaaS clouds, the network selects the path that is the most effective to handle Azure, AWS, or Google Cloud workloads while the built-in layers of security provide protection with DNS URL filtering, advanced malware protection, and application-aware firewalls. Both application performance and security are managed by NetSecOps via the SD-WAN vManage cloud controller portal.

Fostering Collaboration Among NetOps and SecOps is Key to Network Agility


With Cisco SD-WAN’s ability to manage operations and security via the same cloud portal, it really is achievable to create a NetSecOps team that promotes collaboration, reduces CapEx and OpEx, and maximizes device and application QoE and security. Unifying these two critical functions helps create an agile network that makes digital transformation projects possible while keeping on top of advanced security threats. I’d like to hear your thoughts on the ways SD-WAN can provide better synergy between operations and security.

Tuesday 3 March 2020

An Introduction Into Kubernetes Networking – Part 4

Rule based routing


The final topic that we’ll cover in this series is rule based routing (HTTP hosts and paths) using the Kubernetes Ingress. An Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.

“Ingress can provide load balancing, SSL termination and name-based virtual hosting.”

https://kubernetes.io/docs/concepts/services-networking/ingress/

There are a number of ways to configure a Kubernetes Ingress, and in this example we’ll use a fanout. A fanout configuration routes traffic from a single IP address to more than one service.

The ingress YAML in our lab defines a set of rules with two HTTP paths: one to a Guestbook application and one to a different application called the Sockshop.
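As a sketch, a fanout ingress of this kind might look like the following (service names and ports are illustrative assumptions, not the lab’s actual values; newer clusters use the networking.k8s.io/v1 API with a slightly different backend syntax):

```yaml
# Illustrative fanout Ingress: one external entry point, two HTTP paths.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - http:
      paths:
      - path: /guestbook
        backend:
          serviceName: frontend    # Guestbook frontend service (assumed name)
          servicePort: 80
      - path: /sockshop
        backend:
          serviceName: sockshop    # Sockshop service (assumed name)
          servicePort: 80
```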

Kubernetes Ingress Controller


Just as we learnt that Kubernetes Services require an external load balancer, a Kubernetes Ingress itself does not provide the rule-based routing. Instead, it relies on an Ingress Controller to perform this function.

There are many ingress controller options available. In our lab we are using Cisco Container Platform, which automatically deploys an Nginx Ingress Controller to each new Kubernetes tenant cluster.

https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/

In our lab we have an ingress controller pod, nginx-ingress-controller-xxxxx, running on each node. We also have a service of type LoadBalancer which directs our incoming traffic to the Nginx controller.

Similar to how MetalLB worked for Kubernetes Services, the Nginx controller will look for any changes to the Kubernetes Ingress. When a new ingress is configured, the Nginx configuration will be updated with the routing rules defined in the ingress YAML file.

Each ingress controller also has options to provide annotations for custom configuration of the specific controller. For example, here are the Nginx annotations you can use.

https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
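Annotations live under the ingress metadata. As a hedged sketch, two documented Nginx annotations might be applied like this (whether the lab actually needs rewriting or redirect control is an assumption):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
  annotations:
    # Rewrite the matched path to "/" before forwarding to the backend
    nginx.ingress.kubernetes.io/rewrite-target: /
    # Don't force an automatic HTTP-to-HTTPS redirect for this ingress
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /guestbook
        backend:
          serviceName: frontend   # illustrative service name
          servicePort: 80
```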

In this lab, Nginx has noticed the newly defined ingress and has created the routing rules and annotations as part of its configuration. We can confirm this by looking at the nginx.conf file on each nginx-ingress-controller-xxxxx pod.

Since the ingress controller is running in multiple pods we can use the Kubernetes Services outlined above to provide access. In our case we have a LoadBalancer type service configured to direct external traffic to one of the available Nginx controller pods.

From there the Nginx controller will redirect based on the path, either to the guestbook frontend service or the sockshop service. These will in turn forward the traffic onto an available pod managed by the respective service.

Why should I use an ingress?


Besides the routing rules that we’ve just described, a Kubernetes Ingress allows us to conserve IP addresses. When we use a service of type LoadBalancer, we require an externally routable address for each service configured. Assigning these addresses on premises may not have a big impact; in a public cloud environment, however, there is usually a cost associated with each IP address.

When using an ingress, we can have a single external IP address assigned (for the ingress service), and each service behind the ingress can use a ClusterIP. In this scenario the services are only accessible through the ingress and therefore don’t require a public IP address.

As we’ve just alluded to, the Kubernetes Ingress also provides a single entry point on which we can define our routing rules and other configuration, such as TLS termination.

Monday 2 March 2020

An Introduction Into Kubernetes Networking – Part 3

Tracking Pods and Providing External Access


In the previous section we learnt how one pod can talk directly to another pod. What happens, though, if we have multiple pods all performing the same function, as is the case with the guestbook application? Guestbook has multiple frontend pods storing and retrieving messages from multiple backend database pods.

◉ Should each front end pod only ever talk to one backend pod?

◉ If not, should each frontend pod have to keep its own list of which backend pods are available?

◉ If the 192.168.x.x subnets are internal to the nodes only and not routable in the lab, as previously mentioned, how can I access the guestbook webpage so that I can add my messages?

All of these points are addressed through the use of Kubernetes Services. Services are a native concept to Kubernetes, meaning they do not rely on an external plugin as we saw with pod communication.

There are three services we will cover:

◉ ClusterIP
◉ NodePort
◉ LoadBalancer

We can solve the following challenges using services.

◉ Keeping track of pods
◉ Providing internal access from one pod (e.g. Frontend) to another (e.g. Backend)
◉ Providing L3/L4 connectivity from an external client (e.g. web browser) to a pod (e.g. Frontend)

Labels, Selectors, and Endpoints


Labels and selectors are very important concepts in Kubernetes and will be relevant when we look at how to define services.

https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/

◉ “Labels are key/value pairs that are attached to objects, such as pods [and] are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users. Unlike names and UIDs, labels do not provide uniqueness. In general, we expect many objects to carry the same label(s)”

◉ “Via a label selector, the client/user can identify a set of objects”

Keeping track of pods


Consider the deployment file for the Guestbook front-end pods.

As you can see from the deployment, there are two labels, “app: guestbook” and “tier: frontend”, associated with the frontend pods that are deployed. Remember that these pods will receive an IP address from the range 192.168.x.x.
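A minimal sketch of such a deployment might look like this (replica count and container image are assumptions based on the standard Kubernetes guestbook sample, not the lab’s exact file):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook     # the two labels carried by every frontend pod...
        tier: frontend     # ...which services will later select on
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4   # illustrative image
        ports:
        - containerPort: 80
```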

Here is the first service created (ClusterIP). From its YAML definition we can see the service has a selector that uses the same key/value pairs (“app: guestbook” and “tier: frontend”) as the deployment above.

When we create this service, Kubernetes will track the IP addresses assigned to any of the pods that use these labels. Any new pods created will automatically be tracked by Kubernetes.

So now we’ve solved the first challenge. If we have hundreds of frontend pods deployed, do we need to remember the individually assigned pod addresses (192.168.x.x)?

No, Kubernetes will take care of this for us using services, labels, and selectors.
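As a sketch, the ClusterIP service tying this together might be defined as follows (the port numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: ClusterIP      # the default when type is omitted
  selector:
    app: guestbook     # must match the labels on the frontend pods
    tier: frontend
  ports:
  - port: 80           # port exposed on the ClusterIP
    targetPort: 80     # container port on the selected pods
```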


Providing internal access from one pod (e.g. Frontend) to another (e.g. Backend)


Now we know that Kubernetes tracks pods and their associated IP addresses. We can use this information to understand how our frontend pod can access any one of the available backend pods. Remember each tier could potentially have tens, hundreds, or even thousands of pods.

If you look at the pods or processes running on your Kubernetes nodes, you won’t actually find one named “Kubernetes Service”. From the documentation below, “a Service is an abstraction which defines a logical set of Pods and a policy by which to access them”.

https://kubernetes.io/docs/concepts/services-networking/service/

So while the Kubernetes Service is just a logical concept, the real work is being done by the “kube-proxy” pod that is running on each node.

Based on the documentation in the link above, the “kube-proxy” pod will watch the Kubernetes control plane for changes. Every time it sees that a new service has been created, it will configure rules in IPTables to redirect traffic from the ClusterIP (more on that soon) to the IP address of the pod (192.168.x.x in our example).

*** IMPORTANT POINT *** We’re using IPTables here; however, please see the documentation above for other implementation options.

What is the ClusterIP?


The Kubernetes ClusterIP is an address assigned to the service which is internal to the Kubernetes cluster and only reachable from within the cluster.

If you’re using Kubeadm to deploy Kubernetes, the default subnet you will see for the ClusterIP will be 10.96.0.0/12.

https://github.com/kubernetes/kubernetes/blob/v1.17.0/cmd/kubeadm/app/apis/kubeadm/v1beta2/defaults.go#L31-L32

So joining the pieces together, every new service will receive an internal only ClusterIP (e.g. 10.101.156.138) and the “kube-proxy” pod will configure IPTables rules to redirect any traffic destined to this ClusterIP to one of the available pods for that service.

DNS services


Before we continue with services, it’s helpful to know that not only do we have ClusterIP addresses assigned by Kubernetes, we also have DNS records that are configured automatically. In this lab we have configured CoreDNS.

“Kubernetes DNS schedules a DNS Pod and Service on the cluster, and configures the kubelets to tell individual containers to use the DNS Service’s IP to resolve DNS names.”

“Every Service defined in the cluster . . . is assigned a DNS name. By default, a client Pod’s DNS search list will include the Pod’s own namespace and the cluster’s default domain.”

https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/

When we deploy the backend service, not only is there an associated ClusterIP address but we also now have a record, “backend.default.svc.cluster.local”. “Default” in this case being the name of the Kubernetes namespace in which the backend pods run. Since every container is configured to automatically use Kubernetes DNS, the address above will resolve correctly.

Bringing this back to our example above, if the frontend pod needs to talk to the backend pods and there are many backend pods to choose from, we can simply reference “backend.default.svc.cluster.local” in our application’s code and this will resolve to the ClusterIP address, which is then translated to one of the IP addresses of these pods (192.168.x.x).
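A quick way to confirm this resolution from inside the cluster (assuming a test pod, here called busybox, with nslookup available):

```
kubectl exec -it busybox -- nslookup backend.default.svc.cluster.local
```

The lookup should return the service’s ClusterIP rather than an individual pod address.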


NAT


We previously learnt in the pod to pod communication section that Kubernetes requires that network connectivity be implemented without the use of NAT.

This is not true for services.

As mentioned above, when new services are created IPTables rules are configured which translate from the ClusterIP address to the IP address of the backend pod.

When traffic is directed to the service ClusterIP, the traffic will use Destination NAT (DNAT) to change the destination IP address from the ClusterIP to the backend pod IP address.

When traffic is sent from a pod to an external device, the pod IP address in the source field is changed (Source NAT) to the node’s external IP address, which is routable in the upstream network.
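To illustrate, the DNAT rules that kube-proxy programs follow this general shape (the chain names KUBE-SERVICES/KUBE-SVC-*/KUBE-SEP-* are real kube-proxy conventions, but the addresses and hash suffixes here are made up):

```
# Traffic destined to the ClusterIP jumps to a per-service chain
-A KUBE-SERVICES -d 10.101.156.138/32 -p tcp --dport 6379 -j KUBE-SVC-EXAMPLEHASH
# The per-service chain selects one of the available endpoint chains
-A KUBE-SVC-EXAMPLEHASH -j KUBE-SEP-EXAMPLEHASH
# The endpoint chain performs the DNAT to the chosen pod IP
-A KUBE-SEP-EXAMPLEHASH -p tcp -j DNAT --to-destination 192.168.1.24:6379
```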

Providing L3/L4 connectivity from an external client (e.g. web browser) to a pod (e.g. Frontend)



So far we’ve seen that Kubernetes Services continuously track which pods are available and which IP addresses they use (labels and selectors). Services also assign an internal ClusterIP address and DNS record as a way for internal communications to take place (e.g. frontend to backend service).

What about external access from our web browser to the frontend pods hosting the guestbook application?

In this last section covering Kubernetes Services we’ll look at two different options to provide L3/L4 connectivity to our pods.

NodePorts



As you can see from the service configuration, we have defined a “kind: Service” and also a “type: NodePort”. When we configure a NodePort service we need to specify a port (by default from the range 30000-32767) to which the external traffic will be sent. We also need to specify a target port on which our application is listening. For example, we have used port 80 in the guestbook application.

When this service has been configured we can now send traffic from our external client to the IP address of any worker nodes in the cluster and specify the NodePort we have chosen (<NodeIP>:<NodePort>).
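A NodePort service of the kind described might be defined like this (the selector labels are illustrative; 32222 matches the NodePort used in this example):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  selector:
    app: guestbook      # illustrative labels matching the frontend pods
    tier: frontend
  ports:
  - port: 80            # port on the ClusterIP
    targetPort: 80      # port the frontend pods listen on
    nodePort: 32222     # opened on every node; must fall in the NodePort range
```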


In our example we could use http://10.30.1.131:32222 and have access to the guestbook application through a browser.

Kubernetes will forward this traffic to one of the available pods on the specified target port (in this case frontend pods, port 80).

Under the hood Kubernetes has configured IPTables rules to translate the traffic from our worker node IP address/NodePort to our destination pod IP address/port. We can verify this by looking at the IPTables rules that have been configured.


LoadBalancer



The final topic in the Kubernetes Services section will be the LoadBalancer type, which exposes the service externally using either a public cloud provider or an on-premises load balancer.

https://kubernetes.io/docs/concepts/services-networking/service/

Unlike the NodePort service, the LoadBalancer service does not use the IP address from the worker nodes. Instead it relies upon an address selected from a pool that has been configured.

This example uses the Cisco Container Platform (CCP) to deploy the tenant clusters and CCP automatically installs and configures MetalLB for the L3/L4 loadbalancer. We have also specified a range of IP addresses that can be used for the LoadBalancer services.

“MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols.”

https://metallb.universe.tf/
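For reference, at the time of writing MetalLB took its address pool from a ConfigMap along these lines (the range shown is made up; CCP generates the real configuration automatically):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.30.1.200-10.30.1.210   # illustrative pool of routable lab addresses
```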

As you can see from the YAML above, we configure the service using “type: LoadBalancer”, however we don’t need to specify a NodePort this time.

When we deploy this service, MetalLB will allocate the next available IP address from the pool of addresses we’ve configured. Any traffic destined to the IP will be handled by MetalLB and forwarded onto the correct pods.
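A minimal sketch of such a service (the selector labels are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer    # MetalLB allocates the next free IP from its pool
  selector:
    app: guestbook
    tier: frontend
  ports:
  - port: 80
    targetPort: 80
```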


We can verify that MetalLB is assigning IPs correctly by looking at the logs.


Sunday 1 March 2020

An Introduction Into Kubernetes Networking – Part 2

2. Pod-to-Pod Communications


In the subsequent topics we will move away from the two-container pod example and instead use the Kubernetes Guestbook example. The Guestbook features a frontend web service (PHP and Apache), as well as a backend DB (Redis Master and Slave) for storing the guestbook messages.


Before we get into pod-to-pod communication, we should first look at how the addresses and interfaces of our environment have been configured.

◉ In this environment there are two worker nodes, worker 1 and worker 2, where the pods from the Guestbook application will run.

◉ Each node receives its own /24 subnet, worker 1 is 192.168.1.0/24 and worker 2 is 192.168.2.0/24.

◉ These addresses are internal to the nodes; they are not routable in the lab.


*** IMPORTANT POINT: *** Every Kubernetes pod receives its own unique IP address. As we previously saw, a pod can run multiple containers; all containers in a pod share the same network namespace, IP address, and interfaces.

Network Namespaces

Kubernetes and containers rely heavily on Linux namespaces to separate resources (processes, networking, mounts, users etc) on a machine.

“Namespaces are a feature of the Linux kernel that partitions kernel resources such that one set of processes sees one set of resources while another set of processes sees a different set of resources.”

“Network namespaces virtualize the network stack.

Each network interface (physical or virtual) is present in exactly 1 namespace and can be moved between namespaces.

Each namespace will have a private set of IP addresses, its own routing table, socket listing, connection tracking table, firewall, and other network-related resources.”


If you come from a networking background, the easiest way to think of a network namespace is as a VRF; in Kubernetes, each pod receives its own network namespace (VRF).

Additionally, each Kubernetes node has a default or root networking namespace (VRF) which contains for example the external interface (ens192) of the Kubernetes node.

*** IMPORTANT POINT: *** Linux Namespaces are different from Kubernetes Namespaces. All mentions in this post are referring to the Linux network namespace.

Virtual Cables and Veth Pairs

In order to send traffic from one pod to another we first need some way to exit the pod. Within each pod exists an interface (e.g. eth0). This interface provides connectivity from the pod’s network namespace into the root network namespace.

Just as in the physical world there are two interfaces, one on the server and one on the switch, in Kubernetes and Linux we also have two interfaces: the eth0 interface that resides in the pod, and a virtual ethernet (veth) interface that exists in the root namespace.

Instead of a physical cable connecting a server and switchport, these two interfaces are connected by a virtual cable. This is known as a virtual ethernet (veth) device pair and allows connectivity outside of the pods.
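The veth pair concept can be reproduced by hand on any Linux host (requires root; the names and address are arbitrary):

```
ip netns add demo-pod                               # stand-in for a pod's network namespace
ip link add veth-pod type veth peer name veth-root  # the "virtual cable" with two ends
ip link set veth-pod netns demo-pod                 # move one end into the "pod"
ip netns exec demo-pod ip addr add 192.168.1.24/24 dev veth-pod
ip netns exec demo-pod ip link set veth-pod up
ip link set veth-root up                            # the other end stays in the root namespace
```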


The next step is to understand how the veth interfaces connect upstream. This is determined by the plugin in use and for example may be a tunneled interface or a bridged interface.

*** IMPORTANT POINT: *** Kubernetes does not manage the configuration of the pod-to-pod networking itself, rather it outsources this configuration to another application, the Container Networking Interface (CNI) plugin.

“A CNI plugin is responsible for inserting a network interface into the container network namespace (e.g. one end of a veth pair) and making any necessary changes on the host (e.g. attaching the other end of the veth into a bridge). It should then assign the IP to the interface and setup the routes consistent with the IP Address Management section by invoking appropriate IPAM plugin.”

https://github.com/containernetworking/cni/blob/master/SPEC.md#overview-1

CNI plugins can be developed by anyone, and Cisco has created one to integrate Kubernetes with ACI. Other popular plugins include Calico, Flannel, and Contiv, each implementing the network connectivity in its own way.

Although the methods of implementing networking connectivity may differ between CNI plugins, every one of them must abide by the following requirements that Kubernetes imposes for pod to pod communications:

◉ Pods on a node can communicate with all pods on all nodes without NAT

◉ Agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node

◉ Pods in the host network of a node can communicate with all pods on all nodes without NAT

https://kubernetes.io/docs/concepts/cluster-administration/networking/

*** IMPORTANT POINT: *** Although pod to pod communication in Kubernetes is implemented without NAT, we will see NAT rules later when we look at Kubernetes services.

What is a CNI Plugin?


A CNI plugin is in fact just an executable file which runs on each node and is located in the directory, “/opt/cni/bin”. Kubernetes runs this file and passes it the basic configuration details which can be found in “/etc/cni/net.d”.

Once the CNI plugin is running, it is responsible for the network configurations mentioned above.
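For example, a minimal configuration file in “/etc/cni/net.d” might look like this (a generic bridge-plugin sketch, not the Calico configuration used in this lab):

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "192.168.1.0/24"
  }
}
```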

To understand how CNI plugins implement the networking for Kubernetes pod to pod communications we will look at an example, Calico.

Calico

Calico has a number of options to configure Kubernetes networking. The one we’ll be looking at today uses IPIP encapsulation, however you could also implement unencapsulated peering, or encapsulation in VXLAN. See the following document for further details on these options.

https://docs.projectcalico.org/networking/determine-best-networking

There are two main components that Calico uses to configure networking on each node.

◉ Calico Felix agent

The Felix daemon is the heart of Calico networking. Felix’s primary job is to program routes and ACLs on a workload host to provide desired connectivity to and from workloads on the host.

Felix also programs interface information to the kernel for outgoing endpoint traffic. Felix instructs the host to respond to ARPs for workloads with the MAC address of the host.

◉ BIRD internet routing daemon

BIRD is an open source BGP client that is used to exchange routing information between hosts. The routes that Felix programs into the kernel for endpoints are picked up by BIRD and distributed to BGP peers on the network, which provides inter-host routing.

https://docs.projectcalico.org/v3.2/reference/architecture/components

Now that we know that Calico programs the route table and creates interfaces we can confirm this in the lab.


As you can see there are a few interfaces that have been created:

◉ ens192 is the interface for external connectivity outside of the node. This has an address in the 10.30.1.0/24 subnet which is routable in the lab

◉ tunl0 is the interface we will see shortly and provides the IPIP encapsulation for remote nodes

◉ calixxxxx are the virtual ethernet (veth) interfaces that exist in our root namespace. Remember from before that this veth interface connects to the eth0 interface in our pod

*** IMPORTANT POINT: *** As previously mentioned this example is using Calico configured for IPIP encapsulation. This is the reason for the tunnelled interface (tunl0). If you are using a different CNI plugin or a different Calico configuration you may see different interfaces such as docker0, flannel0, or cbr0

If you look at the routing table you should see that Calico has inserted some routes. The default route directs traffic out the external interface (ens192), and we can see our 192.168 subnets.

We’re looking at the routing table on worker 1 which has been assigned the subnet 192.168.1.0/24. We can see that any pods on this worker (assigned an IP address starting with 192.168.1.x) will be accessible via the veth interface, starting with calixxxxx.

Any time we need to send traffic from a pod on worker 1 (192.168.1.x) to a pod on worker 2 (192.168.2.x) we will send it to the tunl0 interface.
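Putting this together, representative entries from worker 1’s routing table would look something like the following (the cali interface name is truncated and the exact output will differ):

```
default via 10.30.1.1 dev ens192
10.30.1.0/24 dev ens192 proto kernel scope link src 10.30.1.131
192.168.1.24 dev calixxxxx scope link                        # local pod via veth
192.168.2.0/24 via 10.30.1.132 dev tunl0 proto bird onlink   # remote node via IPIP
```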

As per this document, “when the ipip module is loaded, or an IPIP device is created for the first time, the Linux kernel will create a tunl0 default device in each namespace”

Another useful link points out, “with the IP-in-IP ipipMode set to Always, Calico will route using IP-in-IP for all traffic originating from a Calico enabled host to all Calico networked containers and VMs within the IP Pool”

https://docs.projectcalico.org/v3.5/usage/configuration/ip-in-ip

So how does Calico implement pod to pod communication without NAT?


Based on what we’ve learnt above, pod to pod communication on the same node is handled by sending packets to the veth interfaces.

Traffic between pods on different worker nodes will be sent to the tunl0 interface which will encapsulate these packets with an outer IP packet. The source and destination IP addresses for the outer packet will be our external, routable addresses (10.30.1.x subnet).

*** IMPORTANT POINT: *** A reminder that in this example we’re using IPIP encapsulation with Calico however it could also be implemented using VXLAN.


We can confirm this encapsulation is taking place by capturing the packets from the ens192 external interface. As you can see from the screenshot, when we send traffic from Frontend Pod 1 (192.168.1.24) to Frontend Pod 2 (192.168.2.15), our inner packets are encapsulated in an outer packet containing the external source and destination addresses of the ens192 interfaces (10.30.1.131 and 10.30.1.132).

Since the 10.30.1.0/24 subnet is routable in the lab, we can send the packets into the lab network and they will eventually find their way from worker 1 to worker 2. Once they’re at worker 2 they will be decapsulated and sent onto the local veth interface connecting to the Frontend Pod 2.

Saturday 29 February 2020

An Introduction Into Kubernetes Networking – Part 1


Cisco Live Barcelona recently took place and there was a lot of focus on Kubernetes, including the launch of the Cisco Hyperflex Application Platform (HXAP). Cisco HXAP delivers an integrated container-as-a-service platform that simplifies provisioning and ongoing operations for Kubernetes across cloud, data center, and edge.

With every new technology comes a learning curve and Kubernetes is no exception.

1. Container to Container Communications


The smallest object we can deploy in Kubernetes is the pod, however within each pod you may want to run multiple containers. A common use case for this is the helper pattern, where a secondary container helps a primary container with tasks such as pushing and pulling data.

Container to container communication within a K8s pod uses either the shared file system or the localhost network interface.

We can test this by using the K8s provided example, two-container-pod, and modifying it slightly.

https://k8s.io/examples/pods/two-container-pod.yaml

When we deploy this pod we can see two containers, “nginx-container” and “debian-container”. I’ve created two separate options to test, one with a shared volume, and one without a shared volume but using localhost instead.

Shared Volume Communication



When we use the shared volume, Kubernetes will create a volume in the pod which will be mapped to both containers. In the “nginx-container”, files from the shared volume will map to the “/usr/share/nginx/html” directory, while in the “debian-container” files will map to the “/pod-data” directory. When we update the “index.html” file from the Debian container, this change will also be reflected in our Nginx container, thereby providing a mechanism for our helper (Debian) to push and pull data to and from Nginx.
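A condensed sketch of the shared-volume pod, closely following the upstream two-container-pod example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                     # pod-scoped volume mapped into both containers
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh", "-c", "echo Hello from Debian > /pod-data/index.html && sleep 3600"]
```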

Localhost Communication



In the second scenario the shared volume has been removed from the pod, and a message has been written to the “index.html” file which now resides only in the Nginx container. As previously mentioned, the other method for multiple containers to communicate within a pod is through the localhost interface and the port number on which they’re listening.

In this example Nginx is listening on port 80, therefore when we run the “curl http://localhost” command from the Debian container we can see that the “index.html” page is served back to us from Nginx.

Here’s the “nginx-container” showing the contents of the “index.html” file.


Confirmation that we’re receiving the file when we curl from the “debian-container”.
